Questy.org

A Few Notes…

I think it is important to make a note about all of the people and places that have helped me ferret out a lot of this configuration.  Like anything in the Open Source world, I don’t figure this stuff out in a vacuum, and I find it quite important to give a nod where a nod is due.

First, to Turner Broadcasting who gave me my first shot at doing this in a relatively small environment.  There was a ton of time spent on the phone, in online forums, reading documentation, and the like.  Everything sort of proceeded from there.  I tried to get such a thing into the backend over at The Weather Channel, but the climate wasn’t right, and there was a great deal of change going on.  Trying to implement such a thing there just wouldn’t have worked out very well.

Next, to Incomm.  They needed a solution, and Brock (my boss) gave me ample time to get a lot of this together, and to do it right without interruption.  Big thanks there.

Finally, and most importantly, my best bud John Beamon.  He encountered and worked with LDAP long before I did.  And, while we have considerably different directories in place at our two respective organizations, his help in putting together how all this works in my head has been invaluable.  I daresay I’ve been into his IM client twice (or more) as often as he’d like, have given him props nowhere near as much as I should (like, at all), and even with all his own work on his plate, he has helped me out immeasurably.  Add to that, he’s a long-time friend and fellow minister, and that’s just good times right there.  They don’t come along like John very often, and I’m grateful to have him as both a friend and fellow nerd.

Kudos, JB!

I’m sure there are still some inaccuracies here, and I’ll be sifting through all this as I continue to build and extend OpenLDAP in this environment.  Things will automagically correct before your eyes, most likely.  If you find anything that happens to scream “No!” to you, feel free to drop me a line, and I’ll be happy to make that change.

HappyLDAPping!

LDAP Administration VI – TLS/SSL

This article goes hand in hand with “LDAP Administration – Part I” with regard to configuring the client.

So, let’s see where we are.  We have a master server you will be doing all administration work on.  This master server replicates to two hosts in the environment that serve LDAP queries to your clients.  These servers are replicants and are load-balanced under a VIP that is pointed to by the name you choose (in our case, ldap.bob.com).  You can change passwords at the client level and have them pushed back up to the master and replicated out to the environment immediately.

Finally, we need to talk about security.  There are a number of ways to do security, but RedHat has done a lot of the footwork for you.  Unfortunately, it’s very poorly documented, and they really Really REALLY want you to use RedHat Directory Server for everything, so I guess it isn’t a priority.

Essentially, we want to secure all queries floating around the network with TLS.  In a RedHat world, you simply make a couple of changes at the server, restart LDAP, and then connect from TLS-enabled clients; everything works just as it did before, except now it runs over an encrypted channel.

First Steps

RedHat has tried to ease the pain of generating certificates by placing all you need in a Makefile on-box.  Navigate to /etc/pki/tls/certs and you will see that there is a Makefile there.  Next, run:

make slapd.pem

to generate the needed files.  If it has already been done for you by the system, you will get the answer:

make: `slapd.pem' is up to date.

If you get this message, you’re halfway there.
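If you’d like to peek at what the Makefile generated (the certificate subject and expiry dates, for instance), openssl can read the PEM directly.  A quick check, assuming the default path used above:

openssl x509 -in /etc/pki/tls/certs/slapd.pem -noout -subject -dates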

Next, edit the /etc/openldap/slapd.conf file.  You will need to refer to the appropriate files to allow for secure operation.  Insert the following lines into that file:

# TLS Security
TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem
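Before going further, it’s worth sanity-checking the configuration.  slaptest will parse slapd.conf and report any problems it finds:

slaptest -f /etc/openldap/slapd.conf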

Next, edit the file /etc/sysconfig/ldap.  Make the following lines:

SLAPD_LDAP=yes
SLAPD_LDAPS=no
SLAPD_LDAPI=no

look like:

SLAPD_LDAP=no
SLAPD_LDAPS=yes
SLAPD_LDAPI=no

Then, restart LDAP:  /sbin/service ldap restart.  These changes do two things.  First, the slapd.conf entries tell the server where its certificates live, and second, the sysconfig change tells the system to serve only from the secure port 636.  (Recall that we are on the replicants, which are, in turn, servers themselves.  We have handled connecting to the master as well as setting the replicant up to receive queries.)
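To confirm the change took, a quick look at the listening ports should show 636 up and 389 gone:

netstat -tln | grep -E ':(389|636)'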

Finally, we connect a client.

Connecting the Client

To allow a client to connect, you need the appropriate certificate on the client (the server’s public CA certificate) to be able to exchange identities with the server and establish the secure session.  To do this, you have to distribute this cert out to each client you wish to connect back to the server.

The cert you will be distributing lives in /etc/pki/tls/certs and is named ca-bundle.crt.  Simply move this cert to your client (I use scp for such an operation) and place it into your openldap cacerts directory like so:

scp -rp ca-bundle.crt host.bob.com:/etc/openldap/cacerts

If you don’t have rights to copy straight into the destination, send it to your home directory, then move the cert into place using “sudo”.
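As a sketch of that two-step approach (assuming your account exists on the client and has sudo rights there):

scp -p ca-bundle.crt host.bob.com:
ssh -t host.bob.com sudo mv ca-bundle.crt /etc/openldap/cacerts/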

Finally, you need to tell the system about the cert.  This is done in /etc/openldap/ldap.conf via three lines that tell the system how to connect, and where the cert lives:

TLS_CACERTDIR /etc/openldap/cacerts
TLS_CACERT /etc/openldap/cacerts/ca-bundle.crt
TLS_REQCERT allow
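At this point, you can test the secure channel directly with a base search over ldaps (assuming the VIP name used throughout this series):

ldapsearch -x -H ldaps://ldap.bob.com:636 -b "dc=bob,dc=com" -s base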

Next, we run the RedHat authentication GUI tool (curses) called authconfig-tui.  (Note: last time I had you hand-edit the /etc/openldap/ldap.conf file, and this is entirely still possible.  I am endeavoring to show you that there is a tool to do this work, should you desire to use it.  If not, simply add the above lines and change the URI to the one below, making sure /etc/nsswitch.conf is configured correctly, and you should be good to go.)
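For reference, the relevant /etc/nsswitch.conf entries generally look like the following (authconfig writes these for you when you select LDAP):

passwd:     files ldap
shadow:     files ldap
group:      files ldap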

In the left column, select “Use LDAP” and in the right column “Use LDAP Authentication”.  Tab down to the “Next” button and press “Enter”.

As misleading as “Use TLS” may be, do not select it.  🙂  Instead, go down to your server line, and modify it like so:

ldaps://ldap.bob.com:636

Your base DN should already be filled out (in our case: dc=bob,dc=com).  Navigate to the “OK” button, and press “Enter”.

This should conclude your client configuration.  Now you should be able to run a query against LDAP with the whole path secured:

id bob
uid=123(bob) gid=123(users) groups=123(users),456(bob)

Conclusion

I’m sure I’ve missed or glossed over something highly important.  I am in the process of discovery on this particular topic, and this article is serving as my documentation store until I can get the whole thing cleaned up & finalized to push back into my work environment as official documentation.  I’ll correct here as I find mistakes and omissions.

LDAP Administration V – Replication

Continuing our discussion of LDAP Administration, there’s the matter of Replication.

So far we’ve created an LDAP store, turned up the server, configured a client, and even connected Apache authentication to it.  However, if we’re going to use our LDAP server for enterprise authentication, then there’s the small matter of “What happens when my authentication server wets the bed?”.

As with anything in the enterprise, you have backup systems.  Sometimes they’re failover systems, sometimes clusters.  Sometimes they’re tandem systems, and sometimes they’re load-balanced.  No matter the configuration, you have redundancy, resiliency, and scalability.  I plan to talk about one of the many scenarios available to LDAP administrators today: the idea of a master server and many replicants.

Layout

In my configuration, I have a single administrative parent.  This system is where we do all administrative level work.  This includes adding users, adding groups, reporting, and the like.  It is also the “provider” store to all replicants in our environment.  We learned earlier how to turn up a server that is queried directly.  Now let’s learn, instead, how to configure this system to replicate itself.

Assume 3 systems total: ldap01.bob.com, ldap02.bob.com, and ldap03.bob.com.  ldap01.bob.com is our master server, and our replicants are ldap02 & ldap03.  First, you need to tell the master it will be replicating.  Shut down LDAP on the primary like so:

/sbin/service ldap stop

This shuts down all daemons and associated processes.  Next, we need to edit our /etc/openldap/slapd.conf to include information regarding where our replicants will be.  You must add a few lines to the master to make this happen, like so:

replogfile      /var/log/ldap/slapd.replog

replica uri=ldap://ldap02.bob.com:389 binddn="cn=admin,dc=bob,dc=com" bindmethod=simple credentials=secret

replica uri=ldap://ldap03.bob.com:389 binddn="cn=admin,dc=bob,dc=com" bindmethod=simple credentials=secret

This can be added at the end of the file.

Next, we take our two fresh servers and turn up a system similar to what ldap01 was before adding the above lines.  On these systems, there are only two important lines to tell them they are replicants and not masters.  They are as follows:

updatedn "cn=admin,dc=bob,dc=com"
updateref ldap://ldap01.bob.com

That is literally the entire configuration.

Populating the Replicants

To have your schema transferred over, and to be working from the same general starting point, I find it important to copy your whole database over to start with.  This is easily done utilizing standard LDAP tools.

First, start back up your master server:

/sbin/service ldap start

Once you’ve done this, the database is up and ready for queries.  We will essentially dump our database for import on each of the replicants.  To do this, we will use the slapcat utility, redirecting the output to a file we can use to move around to the replicants.  Run slapcat as follows:

slapcat > master.ldif

This will output the contents of your LDAP store to a single LDIF-formatted file, suitable for import into other servers.  Simply copy this file to a generic location (such as your personal home directory) on each of the other servers, and we are set for import.
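For example, to stage the dump on the first replicant (dropping it in your home directory there):

scp -p master.ldif ldap02.bob.com:~/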

Once your file is in the new location, you’re ready to import.  With slapd still stopped on the replicant (slapadd writes directly to the underlying database, so the server should not be running), add the LDIF to your store:

slapadd -l master.ldif

Once the import completes, start LDAP as outlined above, and you’re ready to go.  Repeat the process on your third LDAP store, and your full environment is running.

Next Steps

So let’s see where we are.

Master server up and serving… check.  Two slaves configured as replicants, up and running… check.

Now that you have your stores up, you have to do some testing; primarily, that the master replicates to the slaves.  The way I usually do this is to use the Apache Directory Studio I covered in an earlier article.  I simply add a user on the master.  Then, I connect to each of the slaves in turn to see that the user has appeared there.
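If you prefer the command line, the same check can be sketched with ldapsearch against each replicant, assuming a hypothetical test user “jdoe” added on the master:

ldapsearch -x -H ldap://ldap02.bob.com -b "dc=bob,dc=com" "(uid=jdoe)"
ldapsearch -x -H ldap://ldap03.bob.com -b "dc=bob,dc=com" "(uid=jdoe)"

If the user shows up on both, then we’re ready for the next step:  High Availability.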

You have two query hosts that can equally provide query answers to remote clients.  There are several ways you can make these available: round-robin DNS, HA IP failover, and load balancing via a hardware load balancer.  I prefer the last of these.  However, to do so, you need a way to tell the load balancer that your LDAP store is up and responding.

I prefer to use a small script on the system, served up via HTTP to the load balancer, that does a simple operation: it performs an LDAP search, looks for expected information, and then prints a simple “UP” or “DOWN” message to the page it creates for the load balancer to key on.  The script looks like the following:
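Here is a minimal sketch of such a script, assuming a bash CGI, the dc=bob,dc=com base used throughout this series, and an illustrative filename of ldapcheck.cgi:

#!/bin/bash
# ldapcheck.cgi - simple LDAP health check for the load balancer.
# Searches the local store for the admin user's home directory and
# prints UP if it matches, DOWN otherwise.

echo "Content-type: text/plain"
echo ""

RESULT=$(ldapsearch -x -H ldap://localhost -b "dc=bob,dc=com" "(cn=admin)" homeDirectory | grep "homeDirectory: /home/admin")

if [ -n "$RESULT" ]; then
    echo "UP"
else
    echo "DOWN"
fi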

As you can see, all we do is an ldapsearch against our bob.com domain, checking that the home directory for the admin user looks like “/home/admin”.  If the answer returns, we say “UP”; if not, we say “DOWN”.

Place this script into your “cgi-bin” directory, make it executable (chmod 0755), and simply call it in your browser via the URL:  http://yoursite.com/cgi-bin/.  If you have Apache properly configured to serve CGI executables (outside the scope of this document), you should get the status of the individual system.  Do this for both your replicants.

Finally, ask your network team to configure these two systems in a load-balanced configuration behind a VIP (virtual IP).  Have a sensible DNS name pointed at this IP (ldap.bob.com, for instance) and you’re in business.  Now, when you configure your clients to authenticate against LDAP (Article #1 in this series), you just point them at the ldap.bob.com name.  If either of the systems goes out, the load balancer will point you to the machine that is up to serve your requests.

Conclusion

I hope this gives you a basic direction to go in getting high availability set up for your system through a combination of replication and load balancing.  There are other methods for HA in the replicants.  Perhaps we will cover those soon.

Next up:  Securing your LDAP installation.

Linux Docs Docked?

In my Internet and sysadmin travels, I find it necessary from time to time to seek out documentation on a particular subject in the Linux world.  Sometimes I need something specific to Linux, sometimes a package on Linux, but always technical and many times detailed.

For the longest time (since my Linux infancy in 1995), I have used The Linux Documentation Project (TLDP) to find important HOWTOs.  As a community, we have prided ourselves on a ubiquity of documentation and support, but I found something both interesting and alarming on my last visit to TLDP.

In 2010, only 12 TLDP documents have been modified, and in 2009 a mere 5.  That means that in nearly two full calendar years, only 17 documents in the project have been touched.  As many of you already know, 24 months is an eternity in Internet time (much less regarding the growth and progress of a major OS like Linux).

While I am confident that many of these docs are solid and still stand on their own, it does concern me that the documentation hasn’t changed much, and that other projects’ documentation stores are starting to see similar atrophy.

Certainly, with each release we find a new set of release notes and subsequent additions to documentation.  However, as we go on, I find more often than not that a project will have a fundamental change that should really be covered in TLDP, or at least documented at the documentation level (rather than the release-note level), and it simply never happens.  In these cases, you find a “collective knowledge” among people on the support mailing list who “just know” something to be true, but those new to the project or the list may never know it, because it isn’t written down anywhere.

Remember the age-old wisdom:  “If it isn’t written down, it never happened.”

For those of you on a project, please consider your documentation.  It may be time for a rewrite.  Rewriting happens precious little (if TLDP is to be believed) and really needs your attention.  There are countless volunteers out there who may not be coders, but who use your product and would be ecstatic to be counted as part of the team simply by doing documentation for you.

It’s All About the Framework

System Administration is a funny thing.

Many people think that it’s just adding users, performing requests, and working with project groups to get their “next big thing” out the door.

Certainly, much of your work involves those very things, but what younger admins never quite realize, until they’ve built a beast that cannot be fed, is that everything rests on frameworks that tie it all together.  From a system load framework to an authentication framework to an asset management framework, a framework built correctly can save each and every admin countless hours of administration time.  How?

Automation

Take distribution.  You can push files around considerably more easily when you’ve built a framework to do so rather than every admin having an individual way to do it.  In fact, when properly implemented, distribution of a file to “gobs” of similar hosts (that’s a technical term there :)) can be as simple as:

distrib <file> <class>

Now, many will say that rsync or scp will do much of this for you, and that is correct.  However, in the context of your individual site, having symbolic abstractions such as “class of host” goes a long way.  Perhaps a certain file needs to be distributed to only your web servers.  Or maybe only web servers running RedHat 5.3.  If you correctly build a framework for file pushing and management, suddenly heavy lifting becomes a light chore.  After all, the more pulleys in the works, the lighter the load becomes.
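As a toy sketch of what such a wrapper might look like, assuming a hypothetical layout where each host class is a flat file under /etc/hostclasses listing one hostname per line:

#!/bin/bash
# distrib - hypothetical wrapper: push a file to every host in a class.
# Usage: distrib <file> <class>
FILE="$1"
CLASS="$2"

# Each class file lists member hosts, one per line.
while read -r host; do
    scp -p "$FILE" "${host}:${FILE}"
done < "/etc/hostclasses/${CLASS}"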

Authentication

As has been covered on this site in the past, LDAP is a wonderful authentication framework that can be tied to positively everything in your environment.  From Apache authentication against the store, to UNIX authentication, to various types of applications understanding LDAP as a target authentication source, much of the pain of user administration can be solved by having a centralized authentication mechanism.

Frameworks as Philosophy

Rather than continue with examples on a case-by-case basis, consider the entire concept of frameworks and unifying ties across systems and networks.  At many places I’ve worked, organic growth brought about massive numbers of machines that were “siloed” from one another, whether by project boundaries, function boundaries, or other logical delineations we as users superimposed on them.

Logical Differentiation and Service-Orientation

The clear winner over the “as needed” or “for a purpose” way of doing things is the “tiered” or “services-oriented” model of work.  Rather than many groups of things, each forming its own farm of servers, you have one farm of servers that services many different things.  This, I know, sounds something like technological double-speak, but let me explain what I mean.

In a normal environment, we would have a somewhat typical scenario with front-end servers, back-end infrastructure, management, and organization.  The problem we have with this is that if “New Whiz-Bang project #7” comes along, new hardware will need to be procured for each piece of the puzzle.  New app servers, new web servers, new database components, maybe management and authentication considerations… Each project, every time, capital outlay, budgetary justifications, etc.

If, instead, you think in terms of frameworks, your job becomes one of determining the total resources needed across an environment rather than the resources needed for an individual project, and you thus sidestep individual project growth concerns, individual project funding concerns, personnel, etc.  “Siloed” growth and expansion may not bite you today or tomorrow but, as I said early on, will become a beast that cannot be fed.

Consider instead, the following example:

Instead of “these web servers versus those web servers”, you have “the web layer”.  Any and all requests to your company come through “the web layer”.  It is scaled as an entity and not as individual projects.  When scaling happens, all environments benefit.

Instead of “these application servers” versus “those application servers”, you have “the app layer”: a single applications framework that serves all application requests back out through the front-end web layer, leveraging container and web server features to do the “effing magic ™” on the backend and provide a unified front-facing experience to the user.

Extrapolate these ideas… Instead of the app layer, let’s say the app cluster.  Now the power behind this idea becomes clear.  Unlimited scalability with unlimited potential.  How about “the database cluster”?  Regardless of the solution you use, if there is a single database resource (cluster, replication ladder, whatever) that serves back queries you throw at it, how much better is that than “databases for this” and “databases for that”?

Take it a step further… Make an XML services layer that serves out “your data” via a clearly defined API, so that all you do is make XML requests to a services infrastructure rather than hitting your databases directly.  Or, your cluster is composed of “write databases” versus “read databases”, and you’re segmenting the type of traffic you’re serving into reads versus writes, making the read operations light-years faster.

Authentication layers, web layers, XML layers, app layers, database layers… All frameworks that grow as an organism rather than a series of unrelated growths.

When going through your next design adjustment or your next expansion or data center rollout, consider thinking differently about growth and planning.  I believe that if you think in terms of large organisms with several related parts rather than several growths unrelated except in their location, you’ll build into your infrastructure a power and a scalability that will serve you well for many years (and growths!) to come.

Puh-lease

I saw a p1mp of a new book on Gruber called “Being Geek”.  It bills itself as “The Software Developer’s Career Handbook”.  That sort of set me off, honestly.

You mean the people we geeks won’t give access to because, if left to themselves, developers will patently destroy anything they come in contact with in the systems world?  You mean the people who think root is an account that should be used as a tool to cure ALL their ills and knock down all the “obstacles” they encounter?  The people who won’t use “sudo” because it’s too many characters to type and “breaks their flow” when coding?

Oh, I get it, the people who haven’t the slightest clue what it really means to be geek.  To give honor and deference to the system, its security, design, integrity.  They don’t care that there are other people on the box; they just want to meet their date.  And they’ll twist every systems admin into every possible contortion to break all the best practices in the world just to meet their date.  “Being Geek”…  Phah!

These guys like to be called geeks because it is an easy-to-earn, undeserved moniker bestowed on them by people who have no clue what it means.  All the while, they’re breaking every rule and every guideline just to meet a date.  Further, when systems people point out security concerns or elements of systems design these supposed geeks are transgressing, they run to upper management and complain that the systems teams are “blocking their date”, or “they’re blockers”, or “we can’t get anything done.”  Geeks.

A real geek would NEVER do that.

A real geek would write beautiful code that followed all the best-practice rules for the honor of having written it.  A real geek would NEVER even begin to consider using the root account unless it was absolutely necessary.  A real geek would take the recommendations of a systems team (the real geeks, by the way) who spend all their time making sure the platform upon which these geek posers perform their witchcraft is ALWAYS up, ALWAYS stable, ALWAYS up to date, and ALWAYS secure.

I’ve been in this business for about 20 years now, and in that time I have met two developers who were tried-and-true, dyed-in-the-wool geeks.  TWO.

Gimme a break… “Geeks”.

I May Regret This

I’ve been asked by the Digg auto-submission system to paste an invisible key into my next story on my site so Digg can curate my posts.  Of course, if ANYthing on here catches on, my server is toast, but let’s see what happens, shall we?  🙂

Production and Developers

Somewhere in the back recesses of my mind, I was brewing a post about permissioning and the relationship between DEV and sysadmin teams, and when and how elevated access should be granted.

Well, Kyle over at serverfault.com has approached this topic in a post entitled “Should Developers Have Access to Production?”.  I highly recommend this read, as it covers many of the topics I wished to cover.  I do have some additional commentary, but I’ll circle back around to that and share my thoughts at a later time.

Community or One-Upmanship?

Oftentimes I peruse various support forums and IRC channels.  Not for support, and not always to help out, but to just observe the community in action.

I have occasion to see the best of what Linux is all about (as it pertains to community): people literally all over the world, at all levels of development, asking for and giving help on a wide array of issues with their systems, from very basic Unix navigation and installation issues to extremely complicated configuration issues that require no less than code modification and custom compilation of new device drivers.

In and among this crucible of collaboration, I find an almost pernicious insinuation into the conversation.  Whether it be because of youth, undeveloped social skills, or some other societal construct, you begin to find those who are there for the sole purpose of detracting from the conversation: taking pot-shots at newcomers, ridiculing marginally experienced recent converts, and even getting into heavy argumentation regarding matters of practice in the administration and/or management of Linux systems.

For whatever reason, these folks find it a jolly good time to detract from the conversation, to make themselves present for little more than obstruction.  This can (and does) have a chilling effect on the community.

First, the open nature of the community is compromised.  Newcomers don’t wish to ask questions and interact for fear of being ridiculed and condescended to.  Next, it generates conversation away from these support channels about the character of those in the community: that they are elitist and snobbish in their assistance, and that they wish not to help, but to lord their level of attainment over you.  Many times these conversations end with the phrase “I’ll just stick with Windows, I guess.”

Too bad for the community.

Widespread adoption of our beloved OS will not see any significant growth until we move beyond this practice.  In light of this situation, I have some suggestions for community participants, to both help them obtain the sense of community they wish and still allow the newcomer to have a good experience, thus enjoying Linux early on and in turn propagating our community to their circle of friends.

First, if you must hang around Linux support forums, be prepared to answer any and all questions, no matter how dumb you personally believe them to be.  Remember that we all started at ground zero at some point in our lives.  We had those who helped us, and we had to pore through tons of documentation to get up to speed.  Show some understanding and compassion for the newcomer, and help them as much as you can.  If you do this regularly, before you know it those “newbs” will be helping others only slightly newer than themselves with the very issues they themselves have just dealt with.

This does two things.  First, it brings them much higher in the food chain, making them closer to your level and much easier to relate to.  Next, it places yet another level of support for newcomers where you used to help out.  This is an optimal situation.  It broadens the community, “pays it forward” through newcomers to even newer adherents, and then builds up the ecosystem.

Next, if you don’t have anything to offer, say, or help with, it is perfectly OK to shut up.  If you see a question that you really feel is stupid, and you are running over the litany of wonderfully snarky things you could say, maybe it is best that you not say anything at all.  I firmly believe this benefits everyone.  If you must type something, private-message an online friend instead so you can have a private laugh together.

Finally, when you do have things to offer, remember that as users we have preconceived notions about what everyone “should just know”.  The fact of the matter is that everyone doesn’t “just know” those things, and they will never learn them if they’re the butt of finely crafted barbs heading their way.  Be prepared to remediate in a very friendly, non-condescending way, and be prepared to go even as far as “OK, a shell interprets the commands you type on the screen.”  Or even (God forbid) “Your keyboard and/or mouse is sometimes referred to as input, whereas your screen, network card, or modem is considered output.”

Painful, I know, but the more friendly, welcoming, and helpful you are, the greater the part you play in building up the community in a way that is self-perpetuating.  Help out.  Be friendly.  Be patient.  With a little help and a little patience, you build a group of community members who one day will be doing the very same thing for another generation of users.