Questy.org

Vagrant and Docker Love

| comments

Not a full-on post, but more a note for myself to both investigate and test this… It would seem that Vagrant has now added provisioner support for Docker here. Be looking for a new post on this in the near future!

I’m working steadily to start off the new year, so my posting may be somewhat sporadic, but I will continue to blog Puppet fundamentals, supporting tools, and related items as I have the chance.

And in that vein, I encountered something this week I wanted to share with you.

It would seem that the new feature that lets you include Facter facts in a module for pluginsync to distribute, using the new mechanism of:

/modulename/facts.d/external_fact

is not 100% reliable when distributing those facts. By this, I mean the following observed behavior.

You create your fact and place it in the above directory; say, a shell script that gives you a value. You make your fact executable, and running it natively at the shell works perfectly. You do a puppet agent run, and the fact syncs to the agent machine but never becomes available in the facter table. You find the synced location of the fact (in the logs from the sync) and run it manually, and it works perfectly. I spoke with some folks at Puppet in a conversation describing what I was seeing, and they suggested the following workaround:

Make the fact a file resource and place it in /etc/puppetlabs/facter/facts.d. This makes the fact available to the Facter system; it displays correctly in the facter table and responds to facter -p as expected.
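A minimal sketch of what that workaround can look like in Puppet code (the module and fact names here are illustrative, not from the original conversation):

file { '/etc/puppetlabs/facter/facts.d/my_external_fact.sh':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => 'puppet:///modules/mymodule/my_external_fact.sh',  # ships the same shell script
}

Once the file is in place, Facter picks it up on its next run like any other external fact.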

That’s it for now! Look for more to come on workflows and tools in the very near future.

Moving my Tech Content to GitHub

| comments

Happy New Year, all!

Well, you may notice a bit of a change in format & layout. I got tired of fighting the foibles of WordPress. Every few weeks, WordPress decided, quite on its own, that it no longer wished to display my blog, for no reason I could ever discern.

As such, I’ve moved to Octopress, hosted it at GitHub, and am doing a permanent redirect to it from my site until I can work out both Web & Mail: keeping my MX where it is while moving my Web address to GitHub directly.

In the meantime, all tech posts have been duplicated here, and can be found by navigating the menus.

Note:

Over the next few weeks and months, you’ll see things move around, features being added and removed, and themes and plugins changing and/or disappearing as I learn Octopress and figure out all its ins and outs. Please bear with my dust during this time.

PuppetConf 2014

| comments

Glad to be at PuppetConf with #ShadowSoft, exploring all the latest and greatest from Puppet Labs.

Southeast Puppet User’s Group September

| comments

John Ray is bringing the Puppet + Docker goodness in his talk tonight: “Deploying Docker Containers with Puppet”.  Join us each month at the Shadow Soft offices for the latest in DEVOPS topics and information.  Always fun, lots of discussion and information surrounding Puppet topics and associated technologies.  There’s always pizza and beverages of all kinds, and we’ve finally moved into our new meeting/class rooms, so come on out.

The Toolbox Grows…

| comments

So far we’ve gotten our heads around some important things.  First and foremost, vim: our editor and companion for creating great code, with ways to see our code in action and determine at a glance whether our syntax is correct.  Also, we’ve looked at revision control, the single largest “CYA” ohmygodimgladivegotanoldercopytorestoreto sort of paradigm, where you can roll yourself back to previously “known good” revisions to save the day.  Besides that, it’s just darned good practice to keep your code externally saved, revision controlled, and accessible.

I’ve also talked about the importance of workflow clarity and quality.  If you implement a poor workflow, you just have an automated poor workflow.  The key word here is “poor”.

Next up on our browse through the “toolbox” is “Vagrant”.  What is this Vagrant, you ask?

Virtualization is paramount in today’s world, in a number of ways and for a number of reasons: extending your server farms to handle even more application workloads, expanding your own desktop machine to test and try different operating systems, or even just rolling up an ad-hoc VM so you can try something without touching a “real” machine in your environment.

Some may disagree, but I’ve found virtualization to be one of the most powerful tools added to the toolbox in years.  Not only can you prototype systems or applications, you can prototype entire environments.  This is where Vagrant shines, especially in the context of Puppet (master + clients): it allows you to create a fully functioning Puppet environment upon which to develop, prototype, and test without ever jeopardizing even the least important system of your infrastructure.  I count that as a “win”.  Let’s see what this tool can do.

What is Vagrant?

According to its website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.

There’s a lot there, but it’s just a fancy way of saying exactly what I said before.  Vagrant is essentially a framework system that wraps your virtualization engine to manage environments of VMs.  Here is where Vagrant will hold the power for us.

Virtualization

If Vagrant is the framework, then virtualization is the foundation.  Now, I’ve chosen “VirtualBox” for my virtualization technology, but VMware works every bit as well.  I am doing all my testing over VirtualBox, however, so YMMV.  VirtualBox is freely available from Oracle; you can download the appropriate version at https://www.virtualbox.org.  I am running the latest version, 4.3.12 (as of this writing), and it serves the Vagrant system extremely well.

Vagrant

Next, you’ll need to install Vagrant on your system.  You can find all the right packages at http://www.vagrantup.com.  I am currently running version 1.6.3 without errors.

Warning: I want to make a disclaimer here, since I’ve had an issue or two with Vagrant on a platform I don’t use: Windows.  I am a Mac & Linux user and have had no issues using the Vagrant/VirtualBox combo on either of these.  However, literally every time I’ve used Vagrant on Windows, it’s just been a mess.  I’ve known one person (ONE!) who has gotten Vagrant to work on Windows, and it required his getting into the product, editing code, etc.  As such, I wouldn’t recommend it for those new to the platform.

On the Mac platform, you get a .dmg file; you can extract it and run the installer.  Linux versions are available as RPM and Debian packages.  Once it’s installed, let’s mess around a bit with Vagrant to see what we can do.

Getting Started

Vagrant is a unique tool in that it allows you to manage all these varied VMs, but it adds a twist: you don’t have to have the source materials for the VMs you’re installing.  In fact, the simplicity of turning up a new VM is astounding.  Take the following series of commands:

cd 
mkdir precise32
cd precise32
vagrant init hashicorp/precise32
vagrant up

If your Vagrant is installed correctly, a number of things start to happen.  First, Vagrant places a file in your cwd called “Vagrantfile”.  Your Vagrantfile, in its entirety, looks like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "hashicorp/precise32"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

Note that this is a long file with a lot of explanatory documentation.  In actuality, the most important part of your Vagrantfile can be summed up here:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise32"
end

These are the uncommented lines, plus the top two declaratives that tell Vagrant what to do.  It’s a very simple file that does some very powerful things.  First, Vagrant checks the ~/.vagrant.d location in your home directory to see if you already have the “precise32” Vagrant source “box” (more on boxes later).  If you do, it simply starts up a VM in your virtualization engine of choice with a randomized name; mine, for instance, is called “precise32_default_1402504453444_30545”.  If you don’t, Vagrant downloads the pre-rolled image, places it in your .vagrant.d directory, provisions the VM to respond to Vagrant commands, and starts it up within VirtualBox.  Vagrant takes away the selection of an .iso image, connecting it to the virtual CD/DVD-ROM, starting an installer, and so on.  Precise32 is simply a test scenario; Vagrant’s site has quite a number of varied and specially configured “box” files that you can prototype on at its “ready-made” box discovery site: https://vagrantcloud.com/discover/featured.  You can install boxes with too many variations and differentiations to enumerate here, and that’s not really the point for our purposes… you may find these of great assistance in your own workplace, but let’s continue.

When you run the “vagrant init” command listed above, it places a Vagrantfile in the current directory, and when you do a “vagrant up”, Vagrant automatically retrieves your box file, provisions the VM, and starts it.  Now, by simply running “vagrant ssh default”, you are logged into this virtual machine!  You also have full sudo to become root and do any sort of damage you may wish to do.  If you log out (“exit” or CTRL-D) and type “vagrant destroy”, the VM goes away and you have nothing in VirtualBox.
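For reference, the handful of lifecycle commands you’ll use constantly (all run from the directory holding the Vagrantfile) are:

vagrant status       # show the current state of the machine(s)
vagrant ssh default  # log into the running VM
vagrant halt         # shut the VM down, preserving its disk
vagrant up           # boot it back up again
vagrant destroy      # remove the VM from your virtualization engine entirely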

Were we to just stop here, the power inherent in having these Vagrantfiles (sort of like a “Makefile” for boxes) to spin test scenarios up and down at will is incredible.  But let’s look at the Vagrantfile itself: what it can do and how you can customize it.  There is an entire descriptive language surrounding Vagrant, PLUS Vagrant has a plugin infrastructure whereby developers can extend Vagrant’s capabilities.  We will capitalize on these later.

So, imagine a scenario where you can create a directory, copy a text file into it, run a single command, and it automatically provisions a 4-node Puppet Enterprise infrastructure, fully installed with a master and three agents, MCollective fully installed, PuppetDB installed and in use…  literally a full installation just like you would use for your infrastructure…  Now we get powerful.  NOW we have the ability to do some cool things.
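Just to whet your appetite, here is a rough sketch of what a multi-node Vagrantfile can look like; the hostnames and IPs are purely illustrative, and a real Puppet Enterprise build-out needs provisioning steps we’ll cover next time:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise32"

  # One node to become the Puppet master...
  config.vm.define "master" do |master|
    master.vm.hostname = "master.example.com"
    master.vm.network "private_network", ip: "192.168.33.10"
  end

  # ...and three nodes to become agents.
  (1..3).each do |i|
    config.vm.define "agent#{i}" do |agent|
      agent.vm.hostname = "agent#{i}.example.com"
      agent.vm.network "private_network", ip: "192.168.33.1#{i}"
    end
  end
end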

Next time, that’s exactly what we’re going to do.

“Do’s and Dont’s” for Your Puppet Environment

| comments

IT automation, like the features and functions offered by Puppet, is riddled with pitfalls. Nothing dangerous or site-threatening in the near term; however, a bad plan can lead you down a painful path to re-trek when you ultimately need to demolish what you’ve done and re-tool, re-work, or even re-start from scratch.  Some simple suggestions can help smooth your integration, and also provide tools and methodologies that make changes in philosophy easy to test and implement, as well as make the long road back from a disaster easy(-ier?) to navigate.

Here are some simple guidelines that can provide that foundation and framework:

DO Always Use Revision Control

It would seem this would be a foregone conclusion in this day and age, but you would be surprised just how many shops don’t have revision control of any kind in place.  A series of manifests or configurations might be tarred up and sent to the backup system, but aside from dated backups, there’s no real versioning…just monolithic archives to weed through in a time of disaster.

Revision control puts you one command away from restoring those configurations and manifests (and even your data vis-a-vis “Hiera”) to their original locations in the most recent state.

DO Rethink Your Environments

If you automate a bad workflow, you still have a bad workflow.  (albeit an automated one!)

Rethink how you do things and why.  Why do you promote code the way you do, and is there a better way to do it?  Why do you still have a manual portion to your procedure?  Is it entirely necessary, or can it be remanded to Puppet as well?  What things are you doing well?  How can they improve?

Try to think through all your procedures.  There are more of them than you think, and they’re often less optimized than they could be.  If you’re going to implement Puppet automation, it’s time to retool.

DO Implement Slowly and Methodically

Another pitfall a lot of shops wander into is trying to do too much all at once and doing none of it well: implementing too quickly, migrating a huge environment it took years to build (sometimes as much as a decade!) through a single professional services engagement, or moving at an otherwise unrealistic pace.  Automation is complex, but if you take the time to implement correctly, piece-by-piece and hand-in-hand with the rethinking of your environment referred to above, you can revolutionize the way you work and make the environment considerably more powerful, considerably easier to work with, and ultimately release yourself to work on much more interesting problems in your environment.  Take your time to build the environment you want.

DO Engage the Community

By using Puppet, you are the beneficiary of the greatest software development paradigm in history – the Open Source movement.  People all over the world have taken part in crafting the powerful tool you have before you.  If you are able to help in like manner, by all means contribute your code to the community. (With your data in Hiera, this is easier than ever!)  Join a Puppet Users Group.  Share your clever solutions to unique problems with the community via GitHub, the Puppet Forge, your website… give back.  The more you pour in, the more you get out, and something you solve may end up baked into the final product one day in the future.

DON’T Pit Teams Against Each Other

DON’T make this a DEV vs OPS paradigm.  This is a marriage of the best tools of both worlds.  Depending on how your culture breaks down, this could be an OPS-aware way of doing development, or a DEV-informed way of doing operations.  You need to remember one thing in all of it.  The marriage of these worlds is a teamwork effort.

I was averse to the term DEVOPS when it first started being used, as in the development world I was engaged with, it was wielded as a tool to cede root-level access to developers.  In a properly managed, secure environment, this is always a no-no.  Development personnel are not trained systems people and rarely will be.  By the same token, never ask your systems people to delve into core development or to troubleshoot your developers’ code.  They are not tooled for that work.

This does not say that one is better than the other, nor does it say they do not share a certain amount of core skills at the basest levels. Much like the differences between civil and mechanical engineers, each has a base level of knowledge that ties them together, but each is highly specialized.  You don’t want your civil engineer building machine tools just as much as you don’t want your mechanical engineer building bridges.  Each discipline is highly specialized and carries with it nuance and knowledge you only gain through experience…experience on the job.

Instead, find a culture and a paradigm that joins the forces of these two disciplines to build something unique and special rather than wasting time with dissension and argument.

DON’T Expect Automation to Solve Everything

I know that sounds like sacrilege at this point, but it’s true.  No matter how automated your site becomes, how detailed your configuration elements are, or how thoroughly you’ve detailed your entire workflow, you still can never replace the element of human consideration and decision-making.

Automation, as I’ve said before, automates away the mundane to make time for you, DEVOPS person, to work on really interesting and curious work.  You can now write that entire new whiz bang gadget you’ve been conceptualizing for the last several years, but have never quite gotten there because you were too busy “putting out fires”.  Puppet automation is definitely a watershed in modern administration and development, but people are still needed.

Another “intangible” you may not readily think about when considering a DEVOPS infrastructure is one of culture.  The best places to work are always the best cultures brought about by the right collection of people, ideas, personalities, and management styles.  When you find that right mix of people and ideas, the workplace becomes a, forgive me, magical place to be.  Automation can never make that happen.

DON’T Starve Your Automation Environment

Automation solves a lot of things, but one thing it cannot do is feed itself.  This particular animal has a ton of needs over time.  From appropriate hardware to personnel, the environment needs time, attention, and consideration.  Remember that this is the “machine tool” of your whole company.  It is the thing that builds and maintains other things.  As such, its priority rises above that of the next web server or DNS system.

Always allocate enough resources (read: money, personnel, and time) to your environment.  If that means engineer time to work on a special project and to do the job right, that’s what it means.  And, yes, it’s more important than meeting an arbitrarily assigned “live date” for your new widget or site or application.  The environment comes first, and all else follows.  If you give your automation initiative the resources and time it deserves, a number of years down the road you will look back and be amazed at the sheer amount of work your team was able to accomplish just by keeping this simple precept.

DON’T Stop Evolving

Never stop learning.  Never stop bettering yourself or your environment.  Always keep refactoring your code.  (E.g., if you wrote that Apache module 4 years ago, chances are good that what you’ve learned in the interim can go back into making that module even better.)  Always keep your people trained and engaged on the latest developments in Puppet and all the associated tools.  Never stop striving to be better and never stop reaching.  I may sound like your coach from high school in this, but those principles he was trying to impart hold true.  If you continue to drive forward and reinvent yourself as a regular part of your forward pursuits, the endpoint of that evolution will benefit you personally, your team both vocationally and culturally, your company’s efficiency, and your environment’s impact on your bottom line.

Conclusion

If we keep a stronger eye on our environment and tools, one that rises above the simple concepts of “that software I bought” and “fit it in between all the other things you have to do”, and give Puppet its proper place in our company, it can truly revolutionize our workflow.  When properly placed culturally, and from a design, implementation, and workflow perspective, it can transform any shop on levels not readily observable when looking at the price tag or the resource requirements list.  DO let Puppet transform your environment and workflow, and DON’T be afraid to take the plunge.  It’s exciting, challenging, and can easily take your company to the “next level”.

GitHub, Git, and Just Plain Revision Control

| comments

One of the “bugaboos” in the sysadmin world for the longest time was the reluctance to use those “stinky developer tools” in our world for any reason.  I’m not sure of the impetus behind this, but my wager is on something akin to security: yet another open port, or another “attack vector”, if you will.  But today’s competent and conscientious systems admin (not to mention DEVOPS person) will use revision control as the go-to standard for collecting, versioning, backing up, and distributing all manner of things.

I’ve seen some shops use CVS as their choice, old though it is, just as a large “bucket” in which to throw things for safekeeping, with revisions and rollbacks available in case of some uncertain, as-yet-unencountered event.  Subversion was the next generation of revision control tools.  Darling of developers and bane of disk space, Subversion had many more features but performed essentially the same task.

Now, Git is the flavor of the month, and it has not only gained widespread acceptance as a standard way to “do” revision control, it’s the de-facto way to do DEVOPS in a Puppet world.  Granted, there are those brave souls out there who have tried to stick with the older tools, but the modern workflow and the “glue” between all the various components therein are built around Git.  Hence, this post.

What is Git, really?

Git was developed by Linus Torvalds for Linux kernel collaboration.  He needed a new revision control system akin to the previously used BitKeeper software, one unencumbered by licensing restrictions and able to handle the unique distributed development needs of the Linux project.  So, rather than try to use someone else’s project, he collated what was needed and developed the project himself.

Now, Git is used both privately and publicly throughout the world for many projects.  Git is lightweight and efficient, moving changes as diffs rather than whole repositories, and it allows developers to maintain and manage an entire repository on their own systems, whether connected to or disconnected from the Internet.  They can then “push” all their changes back to the central repository as needed.

Enter GitHub

For our purposes, we’ll specifically be working in GitHub.  GitHub is a project offering web-based hosting of your code that you can source from anywhere.  GitHub offers public and private hosting and a spate of other related services for development collaboration on the Internet.  If you do not have a GitHub account, you’ll need to surf on over to the site and sign up for one.  It’s free and it’s fast, and I’ll be using and sourcing it heavily as this series continues.

Basic Git

Git itself is available on most modern platforms and can easily hook into GitHub for our purposes.  I will be mostly referring to command-line usage of git, but you will find quite a bit in the way of tools, frontends, and “helper” apps for Git that you may or may not wish to leverage as you learn and incorporate Git into your workflow.  In the meantime, stick with me on command-line work.

When you install Git on your UNIX-like platform, it will drop a few binaries.  The one we’re most interested in is the git binary itself.  It’s very simply designed and has a very straightforward set of options you can get from the command line by simply typing “git” with no options, or “git help”.  The output is below:

usage: git [-v | --version] [-h | --help] [-C <path>] [-c <name>=<value>]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           [--super-prefix=<path>] [--config-env=<name>=<envvar>]
           <command> [<args>]

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)
clone Clone a repository into a new directory
init Create an empty Git repository or reinitialize an existing one

work on the current change (see also: git help everyday)
add Add file contents to the index
mv Move or rename a file, a directory, or a symlink
restore Restore working tree files
rm Remove files from the working tree and from the index

examine the history and state (see also: git help revisions)
bisect Use binary search to find the commit that introduced a bug
diff Show changes between commits, commit and working tree, etc
grep Print lines matching a pattern
log Show commit logs
show Show various types of objects
status Show the working tree status

grow, mark and tweak your common history
branch List, create, or delete branches
commit Record changes to the repository
merge Join two or more development histories together
rebase Reapply commits on top of another base tip
reset Reset current HEAD to the specified state
switch Switch branches
tag Create, list, delete or verify a tag object signed with GPG

collaborate (see also: git help workflows)
fetch Download objects and refs from another repository
pull Fetch from and integrate with another repository or a local branch
push Update remote refs along with associated objects

'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
See 'git help git' for an overview of the system.


We’re most interested in a small subset of commands for our purposes here.  They are add, commit, pull, push, branch, checkout, and clone.

I will be referencing one particular way to “do” git which works for me, but as with anything TMTOWTDI and YMMV.

GitHub Portion

I am going to assume you’ve created a GitHub Account.  When you create your account, you’ll have a unique URL assigned to you based on your username.  Mine, for instance, is https://github.com/cvquesty/.  The basic interface to GitHub is rather straightforward and looks like the following:

The interface keeps track of all the projects you’re working on and the frequency with which you commit or otherwise use your repository, and (most importantly) provides a centralized server storing those projects, which you can source from any internet-connected system.

Make a Repository

In the upper right-hand corner of your screen, you’ll notice a “+” symbol.  Let’s click that and create a new repository.  You’ll be presented with a dialog to name and describe your new repo.  I’ll use the name “samplerepo” and the description “Sample Repo for my Tutorial” with no options other than the defaults (we’ll go through those processes manually shortly).  After creating the repository by clicking “Create Repository”, I’m presented with a page that has step-by-step instructions on what to do next.  I’ll include that here for you.

As you can see, you have your repo referenced at the top by <username>/<reponame>.  You have instructions on how to use the repo from both the GitHub desktop client and the command line, plus some special instructions for when you already have content on your system and are just now uploading it into this repository you’ve created to hold it.  We’re interested in the command line instructions.

A Place to Git

On my system (a Mac), I have Git installed by default, and I have a directory in my home directory simply called “Projects”.  Under there, I have a “Git” directory.  ALL of my work in Git goes here.  This is not a hard-and-fast rule; I just chose it as my location to place all my Git work so it is centralized and all collected together.

What we’re going to do next is to configure Git, create a location for our repo, make a file to commit to the repo and then push that file up to GitHub to see how that workflow works.  Let’s get started.

Configuring Git

Since Git is personal to you as a user, you need to let Git know a few things about you.  This gives your git server (in our case GitHub) the information it needs when you’re pushing code (like your identity, default commit locations, etc).  First, your name and email:

git config --global user.name "John Doe"
git config --global user.email you@yourmail.com

You’ll only need to perform this once.  There are quite a few options and you can read up on those here at your leisure.
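A couple of other settings I find handy, though they’re entirely optional:

git config --global core.editor vim   # the editor Git opens for commit messages
git config --global color.ui auto     # colorize Git output where supported
git config --list                     # review everything you've configured so far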

Next, create a location for your new repo.  I chose my aforementioned directory and created the location:

/Users/jsheets/Projects/Git/samplerepo

for demonstration purposes.  From here, though, we can pick up with the instructions on the GitHub page displayed after creating your repo. I’ll reproduce that here for reference:

cd /Users/jsheets/Projects/Git/samplerepo
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/cvquesty/samplerepo.git
git push -u origin master

If all has gone well, you have now created an empty README.md file, committed it to your local Git repository and then subsequently pushed it up to GitHub.  You’ll note that we added “origin” as the remote and then we pushed to “origin” in a thing called “master”.  What’s that all about?

GitHub (and Git) refer to the central repo location as “origin”.  This becomes handy when you start pushing between remote repositories and from remote to remote to GitHub, etc.  It makes sense to refer to GitHub functionally rather than by its assigned domain name.  By saying “origin”, we’re making GitHub the de-facto center of everything we’re doing.

Next, we refer to “master”.  What is that?  Simply stated, we’re pushing to a “branch” called “master”.

Branching

Branching is a method by which you can have multiple code “branches” or “threads” in existence simultaneously, with Git managing them all for you.  For instance, you may wish to have one code collection only for use in production systems while maintaining a separate one for development systems.  In fact, you can create a branch with a bug name (bug1234, for instance), commit your changes to it, test it, push it to origin, then pull it down to all your production hosts, solving a big problem in your site or codebase.  Better yet, if it all works great and you’re happy with it, you can “merge” that bugfix back into your main code repository, making it a permanent fixture in your code in whatever branch you like (or even all of them!).
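Sketched out as commands, that bugfix flow might look like this (the branch and file names are hypothetical):

git checkout -b bug1234               # create the bug branch and switch to it
vim manifests/site.pp                 # make your fix
git commit -a -m 'Fix for bug 1234'
git push -u origin bug1234            # share it for testing
git checkout production               # happy with it? merge it back in
git merge bug1234
git push origin production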

When you first create your repo, GitHub makes a “main” branch for you automatically, and calls it “master”.  So, by utilizing the command above, we’re telling Git to push our code (in this case, README.md) to our origin server (GitHub) and put it in the “master” branch.

While we’re on the topic, let’s create two more branches so we can get the full hang of this branching thing.  (Hint:  It’s core to how we integrate this into Puppet).

Makin’ Branches

As I said before, GitHub creates a default “master” branch for you.  If, from your local repository location, you type “git branch”, Git will list a single branch for you:

git branch
* master

This simply tells us which branch we are currently on.  Now, let’s run two commands to create new branches for Git to track.

git branch production
git branch development

Now, run “git branch” again:

git branch
  development
* master
  production

As you can see, your other branches are now visible when running the command.  If you have color, you may notice that the “master” branch is a different color than the others (based on your settings).  If you do not have color, the asterisk denotes what branch is active as well.

Checkout and Commit, Branch and Merge

We have our repository and we have our branches.  We have a single README.md in the current directory, and we are ready to roll committing code and pushing it into our repository.  Let’s perform a simple experiment to get the “hang” of how the branches work and how to switch between them as needed.  Since we’re in “master”, let’s edit our README.md to reflect that by placing a single word, “master”, in the file.  (Use vim as discussed in our last tutorial.)

Once you’re done with your edit, you’ll see that the text is in the file.  You can edit it and you can cat the file and see the contents, but if you view the file up at GitHub, that content is not there yet.  Somehow, a mechanism must be used to put that data there.  Well, there is such a mechanism, and it is a two-part process.

Recall I mentioned that one of the features of Git is that you can have a complete repository local to your machine.  You can work on that repo and make all sorts of changes completely disconnected from your server (in our case GitHub, or “origin” as it is named to Git).  Therefore, in reality, you are dealing with not one but two repositories: the local one on your machine and the remote one at origin.  (Remember the “git remote add origin” above?)

So, to finalize your changes locally, you must “commit” them to your local repository as “final”.  THEN, you can “push” those changes up to your main server (in our case GitHub).  We did as much above with our procedure where we did the commit with a message and then a push up to origin.  However, now that we’ve made changes locally, they are not yet reflected at GitHub.  Logic would dictate another commit is in order:

git commit
or
git commit -m 'Some message about your commit'

As you can see, there are two routes you can go.  If you simply supply “git commit” without any options, you will be brought into the text editor you (or your OS) have configured in the $EDITOR environment variable.  Most platforms use “vi” or “vim” for this, but I have also seen “pico” used in some distributions like Ubuntu Linux.  In any event, you can edit the commit message by placing your comments in; once you save the content and exit, the commit will be complete.  If, however, you do not put anything in, Git will not commit the changes.  This is to enforce good coding practice by requiring some notes about what a committer is doing before making the changes.  It’s a highly recommended workflow to follow.

Once your commit is complete, phase 1 (the local commit) is over.  You can commit over and over, as many times as you like; you have a full, local repository.  In fact, I’d encourage many commits.  Commit when you think about it.  Commit before you walk away from your system.  Commit randomly for no reason in mid-workflow.  The more commits you have, the less likely you are to lose work.

Finally, to get the data up to GitHub, we need to “push” that data off your repository and into your “origin” repository. This is quite simple, and you’ve done it before:

git push -u origin master

Sometimes you may wish to not keep specifying the location you’re pushing to.  If so, you can set a default location for each branch.  Git will tell you just how to do that if you forget the “-u location branch” option.  Let’s say I’m in my master branch and I simply run a “git push”.  Git will tell me I did something wrong, but will also tell me how to eliminate that problem:

fatal: The current branch master has no upstream branch. To push the current branch and set the remote as upstream, use

git push --set-upstream origin master

“fatal” seems a little melodramatic, since Git gives you the answer as to what to do right there.  All you need to do is set the default target once with that last line, and from that point forward, you only need to type “git push” when pushing to GitHub.  Hint:  I do this in ALL my branches at create time.  It saves a lot of typing over time, and like any good sysadmin, I’m lazy.  🙂

So, now I’ve got multiple branches that need this setting, but I’m still stuck in “master”.  How do I get to “development” or “production” to perform the same tasks?

Git provides a “checkout” command.  What you’re saying with “checkout” is:  “Git, I want to be working on branch ‘x’, and I want you to make that my current branch.  If there are any differences between that branch and the one I’m on, please make those changes on-disk for me so I can exclusively be working in branch ‘x’.”  A little verbose, but you get the point.  So, to move to the next branch and do all the wonderful things we did in “master” above, we perform:

git checkout development
vim README.md     # edit the file to say different text
git commit -a -m 'editing README for development branch'
git push --set-upstream origin development

If all has gone well, your development README.md file is now changed and pushed into GitHub.  What about “master”, though?  Well, let’s take a look:

git checkout master
cat README.md

If all has gone well, the contents of README.md are back to what was in your “master” branch.  By checking out “development”, it’ll change back to the new content there.  As a test, check out the “production” branch, change the README.md file, commit it, set your upstream push target, and then push the contents to GitHub.
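Spelled out, that exercise looks just like the development pass:

git checkout production
vim README.md     # change the text to read "production"
git commit -a -m 'editing README for production branch'
git push --set-upstream origin production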

Now you’re cooking with gas.

Conclusion

This is a simple tutorial to get you started with Git & GitHub.  There are MANY tutorials and books that can make you into a Git expert, but they are way outside the scope of this humble little blog.  Let me provide a few of those for you here:

[Git Help](http://git-scm.com/documentation)
[Git Book](http://git-scm.com/book)
[GitHub Help](https://help.github.com)

This documentation should be more than enough to get you moving and well underway with Git ins-and-outs for committing Puppet code and using r10k to interface with and distribute that code around your environment.

Why All the Vim?

| comments

As we move on from the last post on Vim, you may ask yourself why I’m reaching all the way back to text editors.  Well, as you’ll see over time, this is all about workflow: building yourself a detailed workflow by which you can write code, syntax check, commit to revision control, deploy to your Puppet instances, and duplicate that workflow across all your environments.

The text editor itself, while important, is just a tiny part of a much larger picture I hope to cobble together over time.  So, let’s begin to push forward with our coverage of Vim.

Plugins and Syntax Highlighting

I went through the beginnings of Vim to give you a starting point and some basics in the event you have no experience in the Vim world.  One would assume that since you’re on a Puppet/Dev-Ops-y sort of page, all this is old news, but we do have completely “green” readers from time to time, and I didn’t want to leave them out.

The main goal in getting Vim into the picture was to bring you to the point where we start looking at our code and knowing what we’re dealing with at a glance.  As you begin to work in the field, whether coding in Perl, Python, Shell, or the Puppet DSL, there are conventions out there designed to help you and smooth your workflow.  Chief among these is syntax highlighting in code editors generally, and (for our purposes) in Vim specifically.

Take a look at this screen:

While the code is well formatted and everything seems OK, were there any issues in this document, you’d never know it.  From syntax issues to missing elements, none of this is automatically highlighted for you in any way.  Enter syntax highlighting…  Look again:

Much nicer, no?  Were there any missing elements, you’d see something amiss in the document.  The colors would not be organized according to element type, and odd things would be displayed in the page.  Let me “break” the file for you…

You’ll notice that on line 4 something is amiss.  If you compare the two colored instances, you’ll see that your eye is drawn to where things begin to differ.  The best part, for quick and easy glancing, is that the entirety of the file after that one mistake now looks “wrong”.  Ease of view.

How Can This Help With Puppet?

As luck would have it, Vim has a plugin engine that allows you to have pre-built templates that syntax highlight code for you in a predetermined way.  It “recognizes” your code type and highlights accordingly.  The basic plugin structure for Vim lives in your home directory, in the “hidden” .vim directory.  Under this directory you can have a number of wide and varied add-ons to Vim.  We’re just going to talk about plugins.

By default, you don’t have anything in this directory.  You usually have a .vimrc file and a .vim directory in your home directory location, but that’s about it.  The “magic” happens, though, when you add a few pieces.  Those would include a “.vimrc” file which will turn on syntax highlighting, and then a Puppet vim plugin that sorts all the language elements and colorizes them for you.

In your home directory, if it doesn’t already exist, create a .vimrc file with a single entry:

syntax on

This will instruct Vim to syntax highlight, and to be aware of any highlighting plugins that may live in the .vim plugin folder.  Next, place the puppet.vim plugin file into your .vim plugin directory; if that directory does not yet exist, create it.  It lives at ~/.vim/plugin.  Once you load up new Puppet manifests, Vim will recognize the file type and begin to highlight the code according to the convention defined in the puppet.vim file you just downloaded.  If you have any issues, look at the Vim plugin reference here.
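In shell terms, the whole setup amounts to something like this (assuming you’ve downloaded puppet.vim into your current directory):

echo 'syntax on' >> ~/.vimrc     # enable highlighting if you haven't already
mkdir -p ~/.vim/plugin           # create the plugin directory if needed
cp puppet.vim ~/.vim/plugin/     # drop the Puppet syntax plugin into place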

Now you’re ready to work with Puppet files like a pro!

Vim

| comments

Old School

Why on earth am I starting with Vim?  (or “vi” for you old folks)

Vim is the modern “vi” implementation: a full-screen text editor with a myriad of options and abilities far beyond anything I could ever cover here.  But Vim has one thing going for it that no other text editor has.  One simple fact about it puts it in the category of cameras.  You know the old saying?

“The best camera is the one you have with you.”

Thus it is with vi/vim.  It’s literally everywhere.

Every UNIX OS, commercial or not, streamlined or not, old or new, has vi or vim installed on it.  Emacs is a great product, but it just isn’t installed by default everywhere.  Whatever editor you prefer, ${INSERT_EDITOR_HERE} just isn’t as ubiquitous as Vi/Vim.

Since we’re talking about a modern pursuit and workflow (DEVOPS), we’ll be talking mostly about Vim’s capabilities and features.

What Vim is NOT

Vim is not a word processor.  You won’t be writing business letters with it.  Vim is not for writing resumes, making pretty newsletters, or for typesetting a magazine.  The die-hard Vi/Vim fan will tell you that you can do all of the above with it, but that falls into the same category as filling in the Grand Canyon with a teaspoon. You can do it, but why on earth would you want to?

What Vim Does Best

Vim edits text.  Plain…text.  Not pretty bold, italicized, with all sorts of alignment characters and strange paragraphs and pagination doohickeys… no, Vim just makes text files.  Text files that are SO devoid of bells and whistles, in fact, that when you open a Vim created file in your favorite WYSIWYG editor, you’ll see what appears to be a pile of letters and such all crammed together like you had nothing else on your keyboard but letters, numbers, and punctuation.

Vim allows you to eliminate all the cruft and get right down to the matter of creating plain, unencumbered text files.

Why Does it Matter?

When you’re logged onto your favorite Linux through two bastion hosts across several continents and have latency to boot, WordPad will not help.  You’ll need a lightweight text editor within which you can load, edit, and save the single most numerous type of item on a UNIX system… a text file.

Where Can I learn More About Vim?

Vim’s main project page can be found here online.  The main page has links to documentation and various community links as well as connections to various types of plugins and add ons you can use with Vim for any number of tasks.  You can join online forums, mailing lists, and communities whose entire purpose is the extension and promotion of Vim.  But that’s not what we’re up to…

How We Will Use Vim

For our purposes, we will use Vim as a code editor.  No more, no less.  Vim’s abilities can help us see our code in ways that let us know when there’s an issue and can point us in the general direction of our solutions.  So, let’s dive into a minor Vim tutorial.  (If you’re an advanced Vim user, stick with me…)

All Linux distributions and Mac OSX come with Vim pre-installed and ready for action.  Note that some distributions of Linux will have both Vim and Vi.  You will either need to get into the habit of running “vim” from the command line, or set up your shell aliases to load Vim every time you type “vi” instead.

When you launch Vim, you see a screen much like the following.  I use Mac OSX, but the effect is the same, regardless of platform.

I have several features turned on (including the line across the bottom that provides me a lot of information about the file I’m editing), but the main things we will talk about here are syntax highlighting and plugins after a short tutorial on how Vim works.

I Can’t DO Anything!!!

Most people’s frustrations begin right on this page.  From here, nothing seems familiar.  I can’t pull down a menu and I can’t really even choose “exit” from a list of things to do.  The only real hint I have is on the screen above:

type  :q               to exit

…which is quite a peculiar directive.  Why do I have to type a colon?  What does it mean?  And goodness help you if you already managed to type something into the screen.  The instructions you see above disappear, and without the right collection of keystrokes, you’re not getting out of Vim.  You’ll most likely just close the window and start looking for “notepad”.

Here’s where the tutorial starts.

Vim is what is known as a full-screen text editor.  It started back in the Amiga days and was first released publicly in 1991.  Before Vi/Vim, to create text files, there were line-based text editors that only allowed you to see one line of a file at a time.  So you really couldn’t work with huge files: it would be difficult to compare lines of a file to one another, look at the whole flow of a subroutine you had written, or even match wording or syntax from one section to another.

Enter the full-screen editor.

What the full-screen editor did was open a file on disk, reserve space in memory to hold the entire file, and then display a “window” into your file equal to your terminal’s display size.  For instance, I have a terminal right now with a several-thousand-line file open.  Of that file, I can only see 25 lines of 80 columns each.  This “window” onto my file is something I personally configured in my terminal program (in my case, “iTerm”).  I can scroll up and down this file, sliding forward or backward within the file from the beginning to the end (just as you might in an MS Word document), and can interact with and edit any character I can see on the screen.  (We’ll talk about search and replace and other such things later.)

We are currently in what is known as “command mode”.  Many would ask why you don’t just call this “view” mode, and the reasons are very simple.  From this screen, you issue MANY commands to Vim and tell it how you want it to behave for you.  For our purposes, we will use “command” mode and “insert” mode mostly.

Well, How Do I Edit Something?

To edit your file, you enter a mode known as “insert” mode.  There are several different modes you can enter from here, but “insert” mode is the easiest.  You simply type a single lowercase “i” to enter it.  When you do so, a cursor appears on the top line of the page you are viewing, and you are now able to type all you like.  Letters, keys, tabs… all normal typing idioms are available to you from this point.  How, you may then ask, do you save your work to disk?  I’m in this “insert” mode and don’t know how to save!

Think about what you wish to accomplish… you wish to issue the command “save” to Vim.  Command…  as in… “command mode”, perhaps?  Well, we have to go back into command mode, then, so we can issue some commands to Vim and exit the program.

Any time you are in Vim, your “saving grace” is the [esc] key.  Two taps on the escape key always bring you back to command mode from anywhere.  Go ahead and try it.

You’ll now notice you have returned to “command mode” just as you were when you first opened Vim.  The only difference is that everything you typed is now on your screen and has not gone away.  Your “edit buffer” is full of a file you now can do things with.  You can save it, delete it, save it out as a specific file name… a myriad of normal file operations you may be used to from other software packages.

In our case, we want to just save the file.  However, when we opened Vim, we didn’t specify a file name to edit, we just opened Vim.  So, for all intents and purposes, we have an open buffer full of “stuff” and no file name to associate with it.  What we want to do now is to “write” the file to disk.  To do so, we have to issue commands to Vim in Command mode.

Command Mode

To issue commands to Vim, we have to tell it we are issuing a command; otherwise, we may hit a letter that means something else.  Recall that simply by hitting a lowercase “i” we placed Vim into insert mode, and by hitting the [esc] key twice, we left it.  Clearly there’s more to this editor than we can readily see, so how do we save the file?

When you have a buffer with text you would like to save, you first hit a colon “:”.  You’ll notice that Vim places the colon on the bottom line of your screen, to the left, awaiting a command.  While there are several commands we can perform here (as well as joining multiple commands together), we will simply write the file right now.  To do so, at the “colon prompt”, we simply type:

**:w foo.txt**

and press the “enter” key.  You will receive a message on the last line that lets you know the file has been written to disk:

**"foo.txt" [New] 1L, 16C written**

But I’m still in Vim.  What do I do now?

Just like “w” is a command, exiting the program also is a command.  Guessably so, it is the letter “q” for “quit”.  So, as before, you hit the colon key, then the letter q, and then the enter key.  If all goes well, you’ll be back at the command line.
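A few variations on these commands are worth committing to memory right away:

:w            write (save) the buffer to its current file
:w bar.txt    write the buffer out to a file named bar.txt
:q            quit
:q!           quit without saving changes
:wq           write, then quit, in one step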

Much More to This

Were I to do this for all the features of Vim, I’d be writing a book.  However, fortunately for you, there are several tutorials and cheat sheets on Vi/Vim all over the Internet.  Here are a few of my favorites.

The Main Vim Tutorial
Linux.com’s Vim Tutorial

As well as some cheat sheets for you to refer to for quick reference on the various commands available when using Vim:

One of My Favorites
Another Good Cheat Sheet

Take some time learning the basics of Vim before pressing on to the next article: “Customizing Vim”.