
PuppetConf 2014

Glad to be at PuppetConf with #ShadowSoft, exploring all the latest and greatest from Puppet Labs.


Permanent link to this article: http://questy.org/2014/09/puppetconf-2014/

Sep 11

Southeast Puppet User’s Group September


John Ray is bringing the Puppet + Docker goodness in his talk tonight: "Deploying Docker Containers with Puppet".  Join us each month at the Shadow Soft offices for the latest in DEVOPS topics and information.  It's always fun, with lots of discussion surrounding Puppet and its associated technologies.  There's always pizza and beverages of all kinds, and we've finally moved into our new meeting/class rooms, so come on out.

 

Permanent link to this article: http://questy.org/2014/09/southeast-puppet-users-group-september/

Jun 11

The Toolbox Grows…

So far we've gotten our heads around some important things.  First and foremost, vim: our editor and companion for creating great code, and a way to see our code in action and determine at a glance whether our syntax is correct.  We've also looked at revision control, the single largest "CYA" ohmygodimgladivegotanoldercopytorestoreto sort of paradigm, where you can roll yourself back to previously "known good" revisions to save the day.  Besides that, it's just darned good practice to keep your code externally saved, revision controlled, and accessible.

I've also talked about the importance of workflow clarity and quality.  If you implement a poor workflow, you just have an automated poor workflow. The key word here is "poor".

Next up on our browse through the “toolbox” is “Vagrant”.  What is this Vagrant, you ask?

Virtualization is paramount in today's world, in a number of ways and for a number of reasons: extending your server farms to handle even more application expression, expanding your own desktop machine to test or try different operating systems, or even just rolling up an ad-hoc VM so you can try something without touching a "real" machine in your environment.

Some may disagree, but I've found virtualization to be one of the most powerful tools added to the toolbox in years.  Not only can you prototype systems or applications, you can prototype entire environments.  This is where Vagrant shines: especially in the context of Puppet (master + clients), it allows you to create a fully functioning Puppet environment upon which to develop, prototype, and test without ever jeopardizing even the least important system of your infrastructure.  I count that as a "win".  Let's see what this tool can do.

What *is* Vagrant?

According to its website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.

There’s a lot there, but it’s just a fancy way of saying exactly what I said before.  Vagrant is essentially a framework system that wraps your virtualization engine to manage environments of VMs.  Here is where Vagrant will hold the power for us.

Virtualization

If Vagrant is the framework, then virtualization is the foundation.  Now, I've chosen "VirtualBox" for my virtualization technology, but VMware works every bit as well.  I am doing all my testing over VirtualBox, however, so YMMV.  VirtualBox is freely available from Oracle; you can download the appropriate version at https://www.virtualbox.org.  I am running version 4.3.12 (the latest as of this writing) and it serves the Vagrant system extremely well.

Vagrant

Next, you’ll need to install Vagrant on your system.  You can find all the right packages at http://www.vagrantup.com.  I am currently running version 1.6.3 without errors.

Warning!

I want to make a disclaimer here, since I've had an issue or two with Vagrant on a platform I don't use: Windows.  I am a Mac & Linux user, and have had no issues using the Vagrant/VirtualBox combo on either of those.  However, literally every time I've used Vagrant over Windows, it's just been a mess.  I've known one person (ONE!) who has gotten Vagrant to work over Windows, and it required him to get into the product, edit code, etc.  As such, I wouldn't recommend it for those new to the platform.

On the Mac platform, you get a .dmg file; extract it and run the installer.  Linux versions are available as RPM and Debian packages.  Once it's installed, let's mess around a bit with Vagrant to see what we can do.

Getting Started

Vagrant is a unique tool in that it allows you to manage all these varied VMs, but adds a twist.  The big twist is that you don’t have to have the source materials for the VMs you’re installing.  In fact, the simplicity of turning up a new VM is astounding.  Take the following series of commands:

cd <your favorite directory>
mkdir precise32
cd precise32
vagrant init hashicorp/precise32
vagrant up

If your Vagrant is installed correctly, a number of things start to happen.  First, Vagrant places a file in your cwd called "Vagrantfile".  Your Vagrantfile, in its initial state, looks like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "hashicorp/precise32"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # If true, then any SSH connections made will enable agent forwarding.
  # Default value: false
  # config.ssh.forward_agent = true

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Don't boot with headless mode
  #   vb.gui = true
  #
  #   # Use VBoxManage to customize the VM. For example to change memory:
  #   vb.customize ["modifyvm", :id, "--memory", "1024"]
  # end
  #
  # View the documentation for the provider you're using for more
  # information on available options.

  # Enable provisioning with CFEngine. CFEngine Community packages are
  # automatically installed. For example, configure the host as a
  # policy server and optionally a policy file to run:
  #
  # config.vm.provision "cfengine" do |cf|
  #   cf.am_policy_hub = true
  #   # cf.run_file = "motd.cf"
  # end
  #
  # You can also configure and bootstrap a client to an existing
  # policy server:
  #
  # config.vm.provision "cfengine" do |cf|
  #   cf.policy_server_address = "10.0.2.15"
  # end

  # Enable provisioning with Puppet stand alone.  Puppet manifests
  # are contained in a directory path relative to this Vagrantfile.
  # You will need to create the manifests directory and a manifest in
  # the file default.pp in the manifests_path directory.
  #
  # config.vm.provision "puppet" do |puppet|
  #   puppet.manifests_path = "manifests"
  #   puppet.manifest_file  = "site.pp"
  # end

  # Enable provisioning with chef solo, specifying a cookbooks path, roles
  # path, and data_bags path (all relative to this Vagrantfile), and adding
  # some recipes and/or roles.
  #
  # config.vm.provision "chef_solo" do |chef|
  #   chef.cookbooks_path = "../my-recipes/cookbooks"
  #   chef.roles_path = "../my-recipes/roles"
  #   chef.data_bags_path = "../my-recipes/data_bags"
  #   chef.add_recipe "mysql"
  #   chef.add_role "web"
  #
  #   # You may also specify custom JSON attributes:
  #   chef.json = { mysql_password: "foo" }
  # end

  # Enable provisioning with chef server, specifying the chef server URL,
  # and the path to the validation key (relative to this Vagrantfile).
  #
  # The Opscode Platform uses HTTPS. Substitute your organization for
  # ORGNAME in the URL and validation key.
  #
  # If you have your own Chef Server, use the appropriate URL, which may be
  # HTTP instead of HTTPS depending on your configuration. Also change the
  # validation key to validation.pem.
  #
  # config.vm.provision "chef_client" do |chef|
  #   chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
  #   chef.validation_key_path = "ORGNAME-validator.pem"
  # end
  #
  # If you're using the Opscode platform, your validator client is
  # ORGNAME-validator, replacing ORGNAME with your organization name.
  #
  # If you have your own Chef Server, the default validation client name is
  # chef-validator, unless you changed the configuration.
  #
  #   chef.validation_client_name = "ORGNAME-validator"
end

Note that this is a long file with a lot of explanatory documentation.  In actuality, the most important part of your Vagrantfile can be summed up here:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise32"
end

These are the uncommented lines plus the top two declaratives that tell Vagrant what to do.  It's a very simple file that does some very powerful things.  First, it checks the ~/.vagrant.d location in your home directory to see if you already have the "precise32" Vagrant source "box" (more on boxes later).  Next, if you do have it, Vagrant simply starts up a VM in your virtualization engine of choice with a randomized name.  For instance, mine is called "precise32_default_1402504453444_30545".  Vagrant takes away the selection of an .iso image, connecting it to the virtual CD/DVD-ROM, starting an installer, and so on.  It simply downloads a pre-rolled image, places it in your .vagrant.d directory, provisions the VM to respond to Vagrant commands, and starts it up within VirtualBox.  Precise32 is simply a test scenario; Vagrant's "ready-made" box discovery site at https://vagrantcloud.com/discover/featured hosts quite a number of varied and specially configured "box" files you can use for prototyping.  There are boxes with too many variations and differentiations to enumerate here, and that's not really the point for our purposes… you may find these of great assistance in your own workplace, but let's continue.
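
Once a box has been downloaded, you can see what's cached locally with "vagrant box list"; the output will look something like this (the version shown is illustrative):

vagrant box list

hashicorp/precise32 (virtualbox, 1.0.0)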

When you run the "vagrant init" command listed above, it places a Vagrantfile, and when you do a "vagrant up", Vagrant automatically retrieves your box file, provisions the VM, and starts it.  Now, by simply running "vagrant ssh default", you are logged into this virtual machine!  You also have full sudo to become root and do any sort of damage you may wish to do.  If you log out ("exit" or CTRL-D) and type "vagrant destroy", the VM goes away and you have nothing in VirtualBox.
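
The whole lifecycle, then, boils down to a handful of commands:

vagrant up            # download (first run only), provision, and boot the VM
vagrant ssh default   # log into the running VM
exit                  # leave the guest shell (or CTRL-D)
vagrant destroy       # tear the VM down entirely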

Were we to just stop here, the power inherent in being able to have these "Vagrantfiles" (sort of like a "Makefile" for boxes) to spin test scenarios up and down at will is incredible.  But let's look at this in light of the Vagrantfile: what it can do and how you can customize it.  There is an entire descriptive language surrounding Vagrant, plus a plugin infrastructure whereby developers can extend Vagrant's capabilities.  We will capitalize on both later.

So, imagine a scenario where you can create a directory, copy a text file into it, run a single command, and it automatically provisions a 4-node Puppet Enterprise infrastructure, fully installed with a master and three agents, MCollective fully installed, PuppetDB installed and in use…  literally a full installation just like you would use for your infrastructure…  Now we get powerful.  NOW we have the ability to do some cool things.
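
To give a taste of where we're headed, here is a minimal sketch of a multi-machine Vagrantfile: one master and three agents.  The box name, hostnames, and IPs are purely illustrative, not a finished recipe; we'll build the real thing next time:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # The Puppet master
  config.vm.define "master" do |master|
    master.vm.box      = "hashicorp/precise32"
    master.vm.hostname = "master.example.com"
    master.vm.network "private_network", ip: "192.168.33.10"
  end

  # Three agents, stamped out in a loop
  (1..3).each do |i|
    config.vm.define "agent#{i}" do |agent|
      agent.vm.box      = "hashicorp/precise32"
      agent.vm.hostname = "agent#{i}.example.com"
      agent.vm.network "private_network", ip: "192.168.33.#{10 + i}"
    end
  end
end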

Next time, that’s exactly what we’re going to do.

 

Permanent link to this article: http://questy.org/2014/06/toolbox-grows/

May 20

“Do’s and Dont’s” for your Puppet Environment

IT automation, like the features and functions offered by Puppet, is riddled with a number of pitfalls.  Nothing dangerous or site-threatening in the near term; however, evolving a bad plan can lead you down a painful path to re-trek when you ultimately need to demolish what you've done and re-tool, re-work, or even re-start from scratch.  Some simple suggestions can help smooth your integration, and also provide tools and methodologies that make changes in philosophy easy to test and implement, as well as make the long road back from a disaster easy(-ier?) to navigate.

Here are some simple guidelines that can provide that foundation and framework:

DO Always Use Revision Control

It would seem this would be a foregone conclusion in this day and age, but you would be surprised just how many shops don’t have revision control of any kind in place.  A series of manifests or configurations might be tarred up and sent to the backup system, but aside from dated backups, there’s no real versioning…just monolithic archives to weed through in a time of disaster.

Revision control puts you one command away from restoring those configurations and manifests (and even your data vis-a-vis “Hiera”) to their original locations in the most recent state.
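
For instance, with Git (which we'll cover in depth later), restoring a known-good copy of a manifest is as quick as the following sketch (the file path and commit placeholder are hypothetical):

git log --oneline manifests/site.pp            # find the last known-good commit
git checkout <commit-id> -- manifests/site.pp  # restore the file from that commit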

DO Rethink Your Environments

If you automate a bad workflow, you still have a bad workflow.  (albeit an automated one!)

Rethink how you do things and why.  Why do you promote code the way you do, and is there a better way to do it?  Why do you still have a manual portion to your procedure?  Is it entirely necessary, or can it be remanded to Puppet as well?  What things are you doing well?  How can they improve?

Try to think through all your procedures.  There are more than you think, and they're often less optimized than they could be.  If you're going to implement Puppet automation, it's time to retool.

DO Implement Slowly and Methodically

Another pitfall a lot of shops wander into is trying to do too much all at once and doing none of it well.  They implement too quickly, migrating a huge environment it took years to build (sometimes as much as a decade!) through a single professional services engagement or at an otherwise unrealistic pace.  Automation is complex, but if you take the time to implement it correctly, piece by piece and hand-in-hand with the rethinking of your environment referred to above, you can revolutionize the way you work and make the environment considerably more powerful, considerably easier to work with, and ultimately release yourself to work on much more interesting problems in your environment.  Take your time to build the environment you want.

DO Engage the Community

By using Puppet, you are the beneficiary of the greatest software development paradigm in history — the Open Source movement.  People all over the world have taken part in crafting the powerful tool you have before you.  If you are able to help in like manner, by all means contribute your code to the community. (With your data in Hiera, this is easier than ever!)  Join a Puppet Users Group.  Share your clever solutions to unique problems with the community via GitHub, the Puppet Forge, your website… give back.  The more you pour in, the more you get out, and something you solve may end up baked into the final product one day in the future.

DON’T Pit Teams Against Each Other

DON’T make this a DEV vs OPS paradigm.  This is a marriage of the best tools of both worlds.  Depending on how your culture breaks down, this could be an OPS-aware way of doing development, or a DEV-informed way of doing operations.  You need to remember one thing in all of it.  The marriage of these worlds is a teamwork effort.

I was averse to the term DEVOPS when it first started being used, as the development world I was engaged with wielded it as a tool to cede root-level access to developers.  In a properly managed, secure environment, this is always a no-no.  Development personnel are rarely trained systems people.  By the same token, never ask your systems people to delve into core development, or to troubleshoot your developers' code.  They are not tooled for that work.

This does not say that one is better than the other, nor does it say they do not share a certain amount of core skills at the basest levels. Much like the differences between civil and mechanical engineers, each has a base level of knowledge that ties them together, but each is highly specialized.  You don’t want your civil engineer building machine tools just as much as you don’t want your mechanical engineer building bridges.  Each discipline is highly specialized and carries with it nuance and knowledge you only gain through experience…experience on the job.

Instead, find a culture and a paradigm that joins the forces of these two disciplines to build something unique and special rather than wasting time with dissension and argument.

DON’T Expect Automation to Solve Everything

I know, that sounds like sacrilege at this point, but it's true.  No matter how automated your site becomes, how detailed your configuration elements are, or how much you've detailed your entire workflow, you still can never replace the element of human consideration and decision-making.

Automation, as I've said before, automates away the mundane to make time for you, DEVOPS person, to work on really interesting and curious work.  You can now write that whiz-bang gadget you've been conceptualizing for the last several years but have never quite gotten to because you were too busy "putting out fires".  Puppet automation is definitely a watershed in modern administration and development, but people are still needed.

Another “intangible” you may not readily think about when considering a DEVOPS infrastructure is one of culture.  The best places to work are always the best cultures brought about by the right collection of people, ideas, personalities, and management styles.  When you find that right mix of people and ideas, the workplace becomes a, forgive me, magical place to be.  Automation can never make that happen.

DON’T Starve Your Automation Environment

Automation solves a lot of things, but one thing it cannot do is feed itself.  This particular animal has a ton of needs over time.  From appropriate hardware to personnel, the environment needs time, attention, and consideration.  Remember that this is the “machine tool” of your whole company.  It is the thing that builds and maintains other things.  As such, its priority rises above that of the next web server or DNS system.

Always allocate enough resources (read: money, personnel, and time) to your environment.  If that means engineer time to work on a special project and do the job right, that's what it means.  And, yes, it's more important than meeting an arbitrarily assigned "live date" for your new widget or site or application.  The environment comes first, and all else follows.  If you give your automation initiative the resources and time it deserves, a number of years down the road you will look back and be amazed at the sheer amount of work your team was able to accomplish just by keeping this simple precept.

DON’T Stop Evolving

Never stop learning.  Never stop bettering yourself or your environment.  Always keep refactoring your code.  (If you wrote that Apache module 4 years ago, chances are good that what you've learned in the interim can go back into making it even better.)  Always keep your people trained and engaged on the latest developments in Puppet and all the associated tools.  Never stop striving to be better and never stop reaching.  I may sound like your coach from high school in this, but those principles he was trying to impart hold true.  If you continue to drive forward and reinvent yourself as a regular part of your forward pursuits, the endpoint of that evolution will benefit you personally, your team both vocationally and culturally, your company's efficiency, and your environment's impact on your bottom line.

Conclusion

If we keep an eye on our environment and tools that rises above "that software I bought" and "fit it in between all the other things you have to do", and give Puppet its proper place in our company, it can truly revolutionize our workflow.  When properly placed culturally, and from a design, implementation, and workflow perspective, it can transform any shop on levels not readily observable when looking at the price tag or the resource requirements list.  DO let Puppet transform your environment and workflow, and DON'T be afraid to take the plunge.  It's exciting, challenging, and can easily take your company to the "next level".

Permanent link to this article: http://questy.org/2014/05/dos-donts-puppet-environment/

May 13

GitHub, Git, and Just Plain Revision Control

One of the "bugaboos" in the sysadmin world for the longest time was the reluctance to use those "stinky developer tools" in our world for any reason.  I'm not sure of the impetus behind this, but my wager is on something akin to security, or yet another open port, or "attack vector" if you will.  But today's competent and conscientious systems admin (not to mention DEVOPS person) will use revision control as the go-to standard for collecting, versioning, backing up, and distributing all manner of things.

I've seen some shops use CVS, old though it is, just as a large "bucket" in which to throw things for safekeeping, with revisions and rollbacks available in case of some uncertain, as-yet-unencountered event.  Subversion was the next generation of revision control tools.  Darling of developers and bane of disk space, Subversion had many more features but performed essentially the same task.

Now, Git is the flavor of the month, and not only has it gained widespread acceptance as a standard way to "do" revision control, it's the de-facto way to do DEVOPS in a Puppet world.  Granted, there are those brave souls out there who have tried to stick with the older tools, but the modern workflow and the "glue" between all the various components are built around Git.  Hence, this post.

What is Git, really?

Git was developed by Linus Torvalds for Linux Kernel collaboration.  He needed a new revision control system akin to the previously used BitKeeper software that was unencumbered by copyright and able to handle the unique distributed development needs of the Linux project.  So, rather than try and use someone else’s project, he collated what was needed and developed the project himself.

Now, Git is used both privately and publicly throughout the world for many projects.  Git is lightweight and works more efficiently by moving changes via diffs rather than whole repositories, and it allows developers to maintain and manage an entire repository on their own systems, connected to the Internet or not.  They can then "push" all their changes back to the central repository as needed.

Enter GitHub

For our purposes, we'll specifically be working in GitHub.  GitHub is a project offering web-based hosting of your code that you can source from anywhere.  GitHub offers public and private hosting and a spate of other related services for development collaboration on the Internet.  If you do not have a GitHub account, you'll need to surf on over to the site and sign up for one.  It's free and it's fast, and I'll be using and sourcing it heavily as this series continues.

Basic Git

Git itself is available on most modern platforms and can easily hook into GitHub for our purposes.  I will be mostly referring to command-line usage of git, but you will find quite a bit in the way of tools, frontends, and “helper” apps for Git that you may or may not wish to leverage as you learn and incorporate Git into your workflow.  In the meantime, stick with me on command-line work.

When you install git on your unix-like platform, it will drop a few binaries.  The one we’re most interested in is the git binary itself.  It’s very simply designed and has a very straightforward set of options you can get from the command line by simply typing “git” with no options, or “git help”.  The output is below:

usage: git [--version] [--help] [-C <path>] [-c name=value]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

The most commonly used git commands are:
   add        Add file contents to the index
   bisect     Find by binary search the change that introduced a bug
   branch     List, create, or delete branches
   checkout   Checkout a branch or paths to the working tree
   clone      Clone a repository into a new directory
   commit     Record changes to the repository
   diff       Show changes between commits, commit and working tree, etc
   fetch      Download objects and refs from another repository
   grep       Print lines matching a pattern
   init       Create an empty Git repository or reinitialize an existing one
   log        Show commit logs
   merge      Join two or more development histories together
   mv         Move or rename a file, a directory, or a symlink
   pull       Fetch from and integrate with another repository or a local branch
   push       Update remote refs along with associated objects
   rebase     Forward-port local commits to the updated upstream head
   reset      Reset current HEAD to the specified state
   rm         Remove files from the working tree and from the index
   show       Show various types of objects
   status     Show the working tree status
   tag        Create, list, delete or verify a tag object signed with GPG

'git help -a' and 'git help -g' lists available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.

We’re most interested in a small subset of commands for our purposes here.  They are add, commit, pull, push, branch, checkout, and clone.

I will be referencing one particular way to “do” git which works for me, but as with anything TMTOWTDI and YMMV.

GitHub Portion

I am going to assume you’ve created a GitHub Account.  When you create your account, you’ll have a unique URL assigned to you based on your username.  Mine, for instance, is https://github.com/cvquesty/<insert project name here>.  The basic interface to GitHub is rather straightforward and looks like the following:

The interface keeps track of all projects you're working on, the frequency with which you commit or otherwise use your repository, and (most importantly) provides a centralized server storing those projects, which you can source from any internet-connected system.

Make a Repository

In the upper right-hand corner of your screen, you'll notice a "+" symbol.  Let's click that and create us a new repository.  You'll be presented with a dialog to name and describe your new repo.  I'll use the name "samplerepo" and the description "Sample Repo for my Tutorial" with no other options besides the defaults (we'll go through those processes manually shortly).  After creating the repository by clicking "Create Repository", I'm presented with a page that has step-by-step instructions on what to do next.  I'll include that here for you.

(screenshot: the new repository's setup instructions)

As you can see, you have your repo referenced at the top by <userid>/<reponame>.  You have instructions on how to use the repo from both the GitHub desktop client and the command line, and some special instructions for when you already have content locally and are just now uploading it into the repository you've created to hold it.  We're interested in the command line instructions.

A Place to Git

On my system (a Mac), I have Git installed by default, and I have a directory in my home directory simply called "Projects".  Under there, I have a "Git" directory.  ALL of my work in Git goes here.  This is not a hard-and-fast rule; I just chose it as the location for all my Git work so it is centralized and collected together.

What we’re going to do next is to configure Git, create a location for our repo, make a file to commit to the repo and then push that file up to GitHub to see how that workflow works.  Let’s get started.

Configuring Git

Since Git is personal to you as a user, you need to let Git know a few things about you.  This gives your git server (in our case GitHub) the information it needs when you’re pushing code (like your identity, default commit locations, etc).  First, your name and email:

git config --global user.name "John Doe"
git config --global user.email you@yourmail.com

You’ll only need to perform this once.  There are quite a few options and you can read up on those here at your leisure.
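
You can verify what Git recorded at any time (output abbreviated):

git config --list | grep user

user.name=John Doe
user.email=you@yourmail.com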

Next, create a location for your new repo.  I chose my aforementioned directory and created the location

/Users/jsheets/Projects/Git/samplerepo

for demonstration purposes.  From here, though, we can take up with the instructions on the GitHub page displayed after creating your repo. I’ll reproduce that here for reference:

cd /Users/jsheets/Projects/Git/samplerepo
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/cvquesty/samplerepo.git
git push -u origin master

If all has gone well, you have now created an empty README.md file, committed it to your local Git repository and then subsequently pushed it up to GitHub.  You’ll note that we added “origin” as the remote and then we pushed to “origin” in a thing called “master”.  What’s that all about?

GitHub (and Git) refer to their repo location as "origin".  This becomes handy when you start pushing between remote repositories, and from remote to remote to GitHub, etc.  So it makes sense to name GitHub functionally rather than by its assigned domain name.  By saying "origin", we're making GitHub the de-facto center of everything we're doing.

Next, we refer to “master”.  What is that?  Simply stated, we’re pushing to a “branch” called “master”.

Branching

Branching is a method by which you can have multiple code “branches” or “threads” in existence simultaneously, and Git is managing them all for you.  For instance, you may wish to have one code collection only for use in production systems while maintaining a separate one for development systems.  In fact, you can create a random branch with a bug name (bug1234, for instance), commit your changes to that, test it, and push it to origin, then pull it down to all your production hosts, solving a big problem in your site or codebase.  Better yet, if it all works great and you’re happy with it, you can “merge” that bug back into your main code repository, making it a permanent fixture in your code in whatever branch you like. (or even all of them!)
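
That bug-fix flow, sketched end-to-end with commands we'll cover in more detail below (the branch name, file, and commit message are just examples):

git branch bug1234                   # create a branch for the fix
git checkout bug1234                 # switch to it (covered shortly)
vim manifests/site.pp                # fix the problem (hypothetical file)
git commit -a -m 'fix for bug1234'   # commit the fix locally
git push -u origin bug1234           # publish the branch to origin
git checkout master                  # return to the main line
git merge bug1234                    # fold the fix in permanently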

When you first create your repo, GitHub makes a “main” branch for you automatically, and calls it “master”.  So, by utilizing the command above, we’re telling Git to push our code (in this case, README.md) to our origin server (GitHub) and put it in the “master” branch.

While we’re on the topic, let’s create two more branches so we can get the full hang of this branching thing.  (Hint:  It’s core to how we integrate this into Puppet).

Makin’ Branches

As I said before, GitHub creates a default "master" branch for you.  If, from your local repository location, you type "git branch", Git will list a single branch for you:

git branch
* master

This simply tells us what branch we are currently on.  Now, let's run two commands to create new branches for Git to track.

git branch production
git branch development

Now run "git branch" again:

git branch
development
* master
production

As you can see, your other branches are now visible when running the command.  If you have color, you may notice that the “master” branch is a different color than the others (based on your settings).  If you do not have color, the asterisk denotes what branch is active as well.
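
As an aside, Git can also create a branch and switch to it in one step with "checkout -b" (we'll meet "checkout" properly in the next section).  A quick sketch, using a hypothetical branch name:

git checkout -b staging    # create "staging" and switch to it in one command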

Checkout and Commit, Branch and Merge

We have our repository and we have our branches.  We have a single README.md in the current directory, and we are ready to start committing code and pushing it into our repository.  Let's perform a simple experiment to get the "hang" of how branches work and how to switch between them as needed.  Since we're in "master", let's edit our README.md to reflect that by placing a single word in the file: "master" (use vim as discussed in our last tutorial).

Once you're done with your edit, you'll see that the text is in the file.  You can edit it and you can cat the file and see the contents, but if you view the file up at GitHub, that content is not there yet.  Somehow, a mechanism must be used to put that data there.  Well, there is such a process, and it is a two-part process.

Recall I mentioned that one of the features of Git is that you can have a complete repository local to your machine.  You can work on that repo and make all sorts of changes completely disconnected from your server (in our case GitHub… "origin" as it is named to Git).  Therefore, in reality you are dealing with not one but two repositories: the local one on your machine and the remote one at origin.  (Remember the "git remote add origin" above?)

So, to finalize your changes locally, you must “commit” them to your local repository as “final”.  THEN, you can “push” those changes into your main server (in our case GitHub).   We did as much above with our procedure where we did the commit with a message, and then a push up to origin.  However, now that we’ve made changes locally, they are not yet reflected at GitHub.  Logic would dictate another commit is in order:

git commit
or
git commit -m 'Some message about your commit'

As you can see, there are two routes you can go.  If you supply "git commit" without any options, you will be brought into the text editor configured in your (or your OS's) $EDITOR environment variable.  Most platforms use "vi" or "vim" for this, but I have also seen "pico" used in some distributions like Ubuntu Linux.  In any event, you edit the file by placing your comments in; after you save the content and exit, the commit will be complete.  If, however, you do not put anything in, Git will not commit the changes.  This is to enforce good coding practice by requiring some notes about what a committer is doing before making the changes.  It's a highly recommended workflow to follow.

Once your commit is complete, phase 1 (the local commit) is over.  You can commit over and over, as many times as you like; you are a full, local repository.  In fact, I'd encourage many commits.  Commit when you think about it.  Commit before you walk away from your system.  Commit randomly for no reason in mid-workflow.  The more commits you have, the less likely you are to lose work.
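
All those commits also buy you a browsable history.  A quick way to review it (the output shown is illustrative):

git log --oneline

a1b2c3d editing README for development branch
9f8e7d6 first commit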

Finally, to get the data up to GitHub, we need to “push” that data off your repository and into your “origin” repository. This is quite simple, and you’ve done it before:

git push -u origin master

Sometimes you may wish to stop specifying the location you're pushing to.  If so, you can set a default location for each branch.  Git will tell you just how to do that if you forget the "-u location branch" option.  Let's say I'm in my master branch and I simply run a "git push".  Git will tell me I did something wrong, but will also tell me how to eliminate the problem:

fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use

git push --set-upstream origin master

"fatal" seems a little melodramatic, since Git gives you the answer as to what to do right there.  All you need to do is set the default target once with that last line, and from that point forward, you need only type "git push" when pushing to GitHub.  Hint:  I do this in ALL my branches at create time.  It saves a lot of typing over time, and like any good Sysadmin, I'm lazy.  :)

So, now I’ve got multiple branches that need this setting, but I’m still stuck in “master”.  How do I get to “development” or “production” to perform the same tasks?

Git provides a "checkout" command.  What you're saying with "checkout" is: "Git, I want to be working on branch X; make that my current branch.  If there are any differences between that branch and the one I'm on, please make those changes on-disk so I can work exclusively in branch X."  A little verbose, but you get the point.  So, to move to the next branch and do all the wonderful things we did in "master" above, we perform:

git checkout development
edit README.md to say different text
git commit -a -m 'editing README for development branch'
git push --set-upstream origin development
git push

If all has gone well, your development README.md file is now changed and pushed into GitHub.  What about “master”, though?  Well, let’s take a look:

git checkout master
cat README.md

If all has gone well, the contents of README.md are back to what was in your "master" branch.  By checking out "development", it'll change back to the new content there.  As a test, check out the "production" branch, change the README.md file, commit it, set your upstream push target, and then push the contents to GitHub.
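
If you want to check your work, the production sequence mirrors the development example:

git checkout production
(edit README.md to say "production")
git commit -a -m 'editing README for production branch'
git push --set-upstream origin production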

Now you’re cooking with gas.

Conclusion

This is a simple tutorial to get you started with Git & GitHub.  There are MANY tutorials and books that can make you into a Git expert, but they are way outside the scope of this humble little blog.  Let me provide a few of those for you here:

Git Help
Git Book
GitHub Help

This documentation should be more than enough to get you moving and well underway with Git ins-and-outs for committing Puppet code and using r10k to interface with and distribute that code around your environment.

Permanent link to this article: http://questy.org/2014/05/github-git-just-plain-revision-control/

May 09

Why all the Vim?

As we move on from the last post on Vim, you may ask yourself why I've gone remedial all the way back to text editors.  Well, as you'll see over time, this is all about workflow: building yourself a detailed workflow by which you can write code, syntax check it, commit it to revision control, deploy it to your Puppet instances, and duplicate that workflow across all your environments.

The text editor itself, while important, is just a tiny part of a much larger picture I hope to cobble together over time.  So, let’s begin to push forward with our coverage of Vim.

Plugins and Syntax Highlighting

I went through the beginnings of Vim to give you a starting point and some basics in the event you have no experience in the Vim world.  One would assume that since you’re on a Puppet/Dev-Ops-y sort of page, all this is old news, but we do have completely “green” readers from time to time, and I didn’t want to leave them out.

The main goal in getting Vim into the picture was to bring you to the point where we can look at our code and know what we're dealing with at a glance.  As you begin to work in the field, whether coding in Perl, Python, shell, or the Puppet DSL, there are conventions out there designed to help you and smooth your workflow.  Chief among these is syntax highlighting in code editors generally, and (for our purposes) in Vim specifically.

Take a look at this screen:

(screenshot: a Puppet manifest with no syntax highlighting)

While the code is well formatted and everything seems ok, were there any issues in this document, you’d never know it.  From syntax issues to missing elements, none of this is automatically highlighted to you in any way.  Enter syntax highlighting…  Look again:

(screenshot: the same manifest with syntax highlighting)

Much nicer, no?  Were there any missing elements, you’d see something amiss in the document.  The colors would not be organized according to element type, and odd things would be displayed in the page.  Let me “break” the file for you…

(screenshot: the same manifest with a syntax error introduced)

You'll notice that on line 4 something is amiss.  If you compare the two colored instances, you'll see that your eye is drawn to where things begin to differ.  Best of all, for quick and easy glancing, the entirety of the file after that one mistake now looks "wrong".  Ease of view.

How Can This Help With Puppet?

As luck would have it, Vim has a plugin engine that allows you to have pre-built templates that syntax highlight code for you in a predetermined way.  It "recognizes" your code type and highlights accordingly.  The basic plugin structure for Vim lives in your home directory, in the "hidden" .vim directory.  Under this directory you can have a number of wide and varied add-ons to Vim.  We're just going to talk about plugins.

By default, you don't have anything in this directory.  You usually have a .vimrc file and a .vim directory in your home directory location, but that's about it.  The "magic" happens, though, when you add a few pieces: a .vimrc file that turns on syntax highlighting, and a Puppet Vim plugin that sorts all the language elements and colorizes them for you.

In your home directory, if it doesn’t already exist, create a .vimrc file with a single entry:

syntax on

This instructs Vim to syntax highlight and to be aware of any highlighting plugins that live in the .vim plugin folder.  Next, place the puppet.vim plugin file you downloaded into your .vim plugin directory; if that directory does not yet exist, create it.  It lives at ~/.vim/plugin.  Once you load up new Puppet manifests, Vim will recognize the file type and highlight the code according to the conventions defined in puppet.vim.  If you have any issues, look at the Vim plugin reference here.
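
In shell terms, the whole setup looks something like this (assuming you've saved the downloaded plugin as puppet.vim in your home directory):

echo 'syntax on' >> ~/.vimrc      # enable highlighting, if you haven't already
mkdir -p ~/.vim/plugin            # create the plugin directory if needed
mv ~/puppet.vim ~/.vim/plugin/    # drop the Puppet syntax plugin into place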

Now you’re ready to work with Puppet files like a pro!

 

Permanent link to this article: http://questy.org/2014/05/vim-2/

May 08

Puppet User’s Group

Going on right now at Shadow Soft!


Excellent coverage of AWS automation.

Permanent link to this article: http://questy.org/2014/05/puppet-users-group/

May 02

Vim

Old School

Why on earth am I starting with Vim?  (or “vi” for you old folks)

Vim is the modern "vi" implementation: a full-screen text editor with a myriad of options and abilities far beyond anything I could ever cover here.  But Vim has one thing going for it that no other text editor has.  One simple fact about it puts it in the category of cameras.  You know the old saying?

“The best camera is the one you have with you.”

Thus it is with vi/vim.  It’s literally everywhere.

Every UNIX OS, commercial or not, streamlined or not, old or new, has vi or vim installed on it.  Emacs is a great product, but it just isn't installed by default everywhere.  Whatever editor you prefer, ${INSERT_EDITOR_HERE} just isn't as ubiquitous as Vi/Vim.

Since we’re talking about a modern pursuit and workflow (DEVOPS), we’ll be talking mostly about Vim’s capabilities and features.

What Vim is NOT

Vim is not a word processor.  You won’t be writing business letters with it.  Vim is not for writing resumes, making pretty newsletters, or for typesetting a magazine.  The die-hard Vi/Vim fan will tell you that you can do all of the above with it, but that falls into the same category as filling in the Grand Canyon with a teaspoon. You can do it, but why on earth would you want to?

What Vim Does Best

Vim edits text.  Plain…text.  Not pretty bold, italicized, with all sorts of alignment characters and strange paragraphs and pagination doohickeys… no, Vim just makes text files.  Text files that are SO devoid of bells and whistles, in fact, that when you open a Vim created file in your favorite WYSIWYG editor, you’ll see what appears to be a pile of letters and such all crammed together like you had nothing else on your keyboard but letters, numbers, and punctuation.

Vim allows you to eliminate all the cruft and get right down to the matter of creating plain, unencumbered text files.

Why Does it Matter?

When you’re logged onto your favorite Linux through two bastion hosts across several continents and have latency to boot, WordPad will not help.  You’ll need a lightweight text editor within which you can load, edit, and save the single most numerous type of item on a UNIX system… a text file.

Where Can I learn More About Vim?

Vim’s main project page can be found here online.  The main page has links to documentation and various community links as well as connections to various types of plugins and add ons you can use with Vim for any number of tasks.  You can join online forums, mailing lists, and communities whose entire purpose is the extension and promotion of Vim.  But that’s not what we’re up to…

How We Will Use Vim

For our purposes, we will use Vim as a code editor.  No more, no less.  Vim's abilities can help us see our code in ways that let us know when there's an issue and can point us generally in the direction of a solution.  So, let's dive into a minor Vim tutorial.  (If you're an advanced Vim user, stick with me…)

All Linux distributions and Mac OSX come with Vim pre-installed and ready for action.  Note that some distributions of Linux will have both Vim and Vi.  You will either need to get into the habit of running "vim" from the command line, or set up your shell aliases to load Vim every time you type "vi" instead.

When you launch Vim, you see a screen much like the following.  I use Mac OSX, but the effect is the same, regardless of platform.

(screenshot: the default Vim start screen)

I have several features turned on (including the line across the bottom that provides me a lot of information about the file I’m editing), but the main things we will talk about here are syntax highlighting and plugins after a short tutorial on how Vim works.

I Can’t DO Anything!!!

Most people’s frustrations begin right on this page.  From here, nothing seems familiar.  I can’t pull down a menu and I can’t really even choose “exit” from a list of things to do.  The only real hint I have is on the screen above:

type  :q<Enter>               to exit

…which is quite a peculiar directive.  Why do I have to type a colon?  What does it mean?  And goodness help you if you already managed to type something into the screen.  The instructions you see above disappear, and without the right collection of keystrokes, you’re not getting out of Vim.  You’ll most likely just close the window and start looking for “notepad”.

Here’s where the tutorial starts.

Vim is what is known as a full-screen text editor.  It started back in the Amiga days, and was first released publicly in 1991.  Before Vi/Vim, to create text files, there were line-based text editors that only allowed you to see one line of a file at a time.  So, you really couldn’t work with huge files…it would be difficult to see file lines compared to each other or look at the whole flow of a subroutine you had written, or even to match wording or syntax from one section to another.

Enter the full-screen editor.

What the full-screen editor did was open a file on disk, reserving space in memory to hold the entire file, and then display a "window" into your file equal to your terminal's display size.  For instance, I have a terminal right now with a several-thousand-line file open.  Of that file, I can see only 25 lines of 80 columns each.  This "window" onto my file is something I personally configured in my terminal program (in my case, "iTerm").  I can scroll up and down this file, sliding forward or backward from beginning to end (just as you might in an MS Word document), and can interact with or edit any character I can see on the screen.  (We'll talk about search and replace and other such things later.)
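
For reference, a few of Vim's stock motion keys for moving around that window (all used from command mode, which we're about to discuss):

h j k l    left, down, up, right (one character or line at a time)
CTRL-f     forward one full screen
CTRL-b     back one full screen
gg         jump to the top of the file
G          jump to the bottom of the file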

We are currently in what is known as “command mode”.  Many would ask why you don’t just call this “view” mode, and the reasons are very simple.  From this screen, you issue MANY commands to Vim and tell it how you want it to behave for you.  For our purposes, we will use “command” mode and “insert” mode mostly.

Well, How Do I Edit Something?

To edit your file, you enter into a mode known as "insert" mode.  There are several different modes you can enter from here, but "insert" mode is the easiest.  You simply type a single lowercase "i" to enter it.  When you do so, a cursor appears on the top line of the page you are viewing, and you are now able to type all you like.  Letters, <enter> keys, tabs… all normal typing idioms are available to you from this point.  How, you may ask, do you then save your work to disk?  You're in this "insert" mode and don't know how to save!

Think about what you wish to accomplish… you wish to issue the command “save” to Vim.  Command…  as in… “command mode”, perhaps?  Well, we have to go back into command mode, then, so we can issue some commands to Vim and exit the program.

Any time you are in Vim, your "saving grace" is your [esc] key.  Two taps on the escape key always brings you back to command mode from anywhere.  Go ahead and try it.

You’ll now notice you have returned to “command mode” just as you were when you first opened Vim.  The only difference is that everything you typed is now on your screen and has not gone away.  Your “edit buffer” is full of a file you now can do things with.  You can save it, delete it, save it out as a specific file name… a myriad of normal file operations you may be used to from other software packages.

In our case, we want to just save the file.  However, when we opened Vim, we didn't specify a file name to edit; we just opened Vim.  So, for all intents and purposes, we have an open buffer full of "stuff" and no file name to associate with it.  What we want to do now is "write" the file to disk.  To do so, we have to issue commands to Vim in command mode.

Command Mode

To issue commands to Vim, we have to tell it we are issuing a command; otherwise, you may hit a letter that means something else.  Recall that simply by hitting a lowercase "i" we placed Vim into insert mode, and by hitting the [esc] key twice, we left it.  Clearly, there's more to this editor than we can readily see, so how do we save the file?

When you have a buffer with text you would like to save, you have to first hit a colon “:”.  You’ll notice that Vim places the colon on the bottom line of your screen to the left, awaiting a command.  While there are several commands we can perform here (as well as joining multiple commands together), we will simply write the file right now.  To do so, while at the “colon prompt”, we simply type:

:w foo.txt

and press the “enter” key.  You will receive a message on the last line that lets you know the file has been written to disk:

“foo.txt” [New] 1L, 16C written

But I’m still in Vim.  What do I do now?

Just like “w” is a command, exiting the program also is a command.  Guessably so, it is the letter “q” for “quit”.  So, as before, you hit the colon key, then the letter q, and then the enter key.  If all goes well, you’ll be back at the command line.
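
While we're at it, here are a few more colon commands you'll use constantly:

:w           write (save) the current file
:w bar.txt   write the buffer out to a file named bar.txt
:q           quit
:q!          quit without saving, discarding any changes
:wq          write, then quit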

Much More to This

Were I to do this for all the features of Vim, I’d be writing a book.  However, fortunately for you, there are several tutorials and cheat sheets on Vi/Vim all over the Internet.  Here are a few of my favorites.

The Main Vim Tutorial
Linux.com’s Vim Tutorial

As well as some cheat sheets for you to refer to for quick reference on the various commands available when using Vim:

One of My Favorites
Another Good Cheat Sheet

Take some time learning the basics of Vim before pressing on to the next article: “Customizing Vim”.

Permanent link to this article: http://questy.org/2014/05/vim/

May 02

New Tools and Old Schools

New Tools

Once you’re treading water in this whole DEVOPS thing, a lot of terms get thrown around and a lot of “newest bestest” gets offered up as the cure for everything including the kitchen sink… oh, and cancer.  However, I think when the hubbub is the loudest, that’s when I really take a step back and ask myself what we’re trying to do, and what’s the simplest, most repeatable and safest way to make it happen.

As with any sufficiently new technology, there's a learning curve that accompanies such a shift, and new tools start to speckle your horizon such that you wonder where to start, what is imperative, and what is optional to occupy your time with.

Thus it is with Puppet-y stuff.

Given that the whole DEVOPS thing includes within it a heavy lean toward DEV, the regular rank-and-file sysadmin may find himself thrust into a world of development terms he may have only heard in passing in some random meeting or other.  The tools of the developer are as varied and arrayed as those for the operations guru and every bit as arcane (in some cases) as the esoteric shell command only installable from some guy in Russia’s repo, with 30 or more command line switches to get a single piece of data upon which to work your evil schemes.

Well… that’s why this blog.

I'm about to embark upon coverage of a set of tools.  However, I will not observe this set of tools in a vacuum, but in light of a powerful workflow engine that will empower you to become considerably more efficient in writing Puppet code.  The tools in particular I'd like to go over, and (much in the same way I covered Puppet Open Source) instruct you step-by-step on how to install, configure, and use, are as follows:

Vagrant
r10k
GitHub
Vim (yes, Vim)
puppet-lint
Geppetto

This is by no means an exhaustive list, but it collects together the best tools to assemble a workflow that will speed your work, so you don't spend all your time working on tools instead of on Puppet code, which is our main focus and goal.

Old Schools

At the end of the day, the really important things are conceptual.  Whether you use one new whizbang tool to do the heavy lifting, several tools working together to achieve this goal, or you do the heavy lifting all on your own, the process and the rules remain the same.

DO keep all your code in revision control.
DO syntax highlight and check your code validity.
DO build repeatable, consistent environments in a timely fashion.
DO have a method to share your work environments with others.
DO document heavily, both in and out of code segments.
DO have a solid end-to-end workflow that enables rapid iteration.

What of this is new?  Whether we're talking about keeping all your shell scripts in CVS, deploying your script repo with SVN, or deploying versioned code caches with ${PACKAGE_MANAGEMENT_SYSTEM}, we're talking about the same general good practices.  All Puppet and its supporting tools do is make those practices tightly integrated with management consoles.  Organizing and institutionalizing your workflow gives a steady underpinning to the cool DEVOPS tools and makes everything repeatable, sharable, and collaborative.  This is where the power comes from.  THIS is where DEVOPS makes sense.

When your workflow is solid, your DEVOPS tools are strong, and your culture has bought in to both, mundane work becomes an afterthought and you get to work on the really interesting things that you’ve been letting slide while putting out fires that could’ve been best managed via configuration management anyhow.

My hope is to build the concept and context from the ground up to show you one tight, functional way this can be accomplished.  Let’s start next time with Vim.

Permanent link to this article: http://questy.org/2014/05/new-tools-old-schools/

Apr 23

Workflows, Tools, and a Myriad of Gobbledygook…

Ok, so first let’s cover the gobbledygook.

I’ve had a lot of feedback on various parts of the blog here, and I thought I’d address a few of them here.

Q:  Why didn’t you just point at the appropriate Yum repos for installing Puppet?

A:  Easy.  I can do “yum install foo” and never know what’s going on behind the scenes just like anyone.  My goal here was to give a point-by-point installation guide so those who are interested could know what all the moving parts were and how they fit together.

Q:  Why are you using the dashboard and not Foreman?

A:  Also easy… at least at the time of this writing, The Foreman has been indicated as the next "de facto" standard front end, but right now the Enterprise Console is essentially a turbocharged version of the Dashboard.  As such, when I begin to talk about extending the Enterprise Console, the Dashboard is the analogue by which you can most closely obtain the same experience (short of installing Puppet Enterprise itself).

Q:  Why OSS and not PE?

A:  Well, to be frank… PE is about as simple as you can get.  You can learn a bit about what portions to install and how to connect them all together across machines (something I plan to cover), and you can learn about constructing an answer file (or several, for a large, complex installation), but the vast majority of your PE installations will be a Q/A interaction with the installer.

 

Workflows and Tools

One of the biggest changes in how I deal with Puppet has been my adoption and implementation of workflows, as well as some modern tools I've been made aware of by the Puppet Labs folks.  Among these are Vagrant, r10k, and GitHub, along with many others that work together, are tightly integrated, and require a lot of configuration and setup to "make happen".  I intend to cover those here.

So, what are all these things?

Vagrant

Vagrant is a tool that allows you to pre-configure a small virtualized environment on your host consisting of a Puppet Master and any number of agents for use in a dynamic, iterative fashion.  By bringing up a Vagrant environment, I have a miniature development environment from which I can actually test Puppet code I write in a full PE environment and work out kinks you don’t normally encounter when working independently on a single workstation.  From this environment, I can puppet-lint check all my code and then push all that code out of the Vagrant environment up to GitHub, then pull it down to my production instance as needed.  Further, it allows me to simulate all environments I have in my corporate setup (DEV, PROD, TEST, etc.) and commit those differences to the appropriate branches in GitHub.

r10k

The method by which I iterate, push/pull code, and deploy to various environments, both in my Vagrant instance and in my production site, is r10k.  A leap forward past "puppet librarian", r10k is the glue between GitHub and Vagrant, as well as between your GitHub instance and your corporate site.  It can tie to public GitHub as well as corporate (private) GitHub.
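
r10k drives all of this from a file called a "Puppetfile" in your control repository, which declares the modules (and versions) each environment should carry.  A minimal sketch (the module names and Git URL are illustrative):

# Puppetfile
mod 'puppetlabs/stdlib'            # fetch a module from the Puppet Forge

mod 'apache',                      # or pin a module to a Git repository
  :git => 'https://github.com/example/puppet-apache.git'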

GitHub

GitHub is considerably more well known, but it's important to the whole process, as you can read above.  GitHub is a revision-control repository designed for rapid deployment, iteration, and isolated developer work with periodic pushes back to GitHub (distributed development).  I intend to do a simple tutorial on GitHub as well.

Look forward to piece-by-piece coverage of each of these and my thoughts as I prepare for and take the Puppet Labs Certification test.

Permanent link to this article: http://questy.org/2014/04/workflows-tools-myriad-gobbledygook/
