
Azure ARM Policy to Block Public IPs

Azure ARM policies are a great way to put limits around your Azure subscription or resource groups, and one of the cool things you can do is prevent specific types of resource creation. Public IP addresses are created by default when you create a new IaaS virtual machine. This may be OK in some instances, but what if you want to prevent these from being created across the board? The following policy will prevent virtual machine creation if a public IP address is assigned, and will also prevent public IP address object creation if you are trying to add a public IP to a VM. The only scenario it won’t prevent is the attachment of an existing public IP to a virtual machine.

{
  "if": {
    "anyOf": [
      {
        "source": "action",
        "like": "Microsoft.Network/publicIPAddresses/*"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}

Here's some PowerShell you can use to create this policy. Note that the policy definition is inline here; you could also put it in a .json file and reference it by path when creating the policy definition. This script will create the policy and assign it to a resource group you specify. Replace everything in angle brackets with values specific to your environment.

# Subscription selection
Login-AzureRmAccount
$sub = "<subscription name>"
Get-AzureRmSubscription -SubscriptionName $sub | Set-AzureRmContext

# Get the resource group
$rgname = "<resource group name>"
$rg = Get-AzureRmResourceGroup -Name $rgname

# Create the policy definition
$definition = '{"if":{"anyOf":[{"source":"action","like":"Microsoft.Network/publicIPAddresses/*"}]},"then":{"effect":"deny"}}'
$policydef = New-AzureRmPolicyDefinition -Name NoPubIPPolicyDefinition -Description 'No public IP addresses allowed' -Policy $definition

# Assign the policy
New-AzureRmPolicyAssignment -Name NoPublicIPPolicyAssignment -PolicyDefinition $policydef -Scope $rg.ResourceId

Automating SQL Server With Chef

Chef has made great strides recently in enhancing its support and capabilities for managing Windows hosts. I've been working on automating SQL Server with Chef, and wanted to share in this post how you can run TSQL against a SQL Server instance.

There are a couple of ways to run TSQL against a server. One is to use the database cookbook, which makes a few Lightweight Resource Providers (LWRPs) available to manage databases, including one for SQL Server. The other is to use either the execute resource to call sqlcmd.exe, or the powershell_script resource with the Invoke-Sqlcmd PowerShell cmdlet. The thing to keep in mind is that the database LWRP doesn't play well with TSQL containing multiple batches (using the GO keyword), so if that's the case in your situation you're probably better off with the execute or powershell_script resources. Just a decision point to keep in mind.

So let's cover the execute resource method of running a TSQL script. In this case it's a database maintenance script (based on the awesome Ola Hallengren scripts) that sets up some standard maintenance activities. It's a script that I drop on the server with a Chef template resource, and there are a couple of variables that come into play to set some values in the script itself.

template 'C:\DBScripts\DatabaseMaintenance.sql' do
  source 'DatabaseMaintenance.sql.erb'
  variables(
    :backup_path => 'C:\db_backups',
    :retention_hours => '168'
  )
end

Now I want to actually execute the script. First, I make sure that the sqlps module is imported. This is needed for the guard on the execute resource and ensures that the invoke-sqlcmd cmdlet is present on the system. In my specific case it was much easier to get the return values from the invoke-sqlcmd command than trying to get them through sqlcmd.exe.

powershell_script 'sqlps module' do
  code 'Import-Module "sqlps" -DisableNameChecking'
end

Next comes the actual execute resource. I wasn't able to get the full functionality of the script working with the powershell_script resource in my case, so I had to mix the execute resource with a powershell_script guard. I'll be circling back to see if I can get it working fully in the powershell_script resource so that it's a bit cleaner, but this works for now. Here I run the script through sqlcmd.exe, guarded by a not_if that counts the jobs starting with DBMaint (a naming standard I follow). If there are 11 jobs we know the jobs are already there and the resource is skipped; if there aren't 11 jobs, the script runs. The code in the not_if guard returns either true or false depending on the result of the query. This brings a bit of idempotence to the setup of these jobs, ensuring that we only do something when something actually needs to be done. It's not perfect, but it's a good starting point.

execute 'setup db-maint jobs' do
  command "sqlcmd -S localhost -i \"C:\\DBScripts\\DatabaseMaintenance.sql\""
  guard_interpreter :powershell_script
  not_if "(invoke-sqlcmd -ServerInstance \"localhost\" -Query \"select count(*) from msdb.dbo.sysjobs where name like 'DBMaint%'\").Column1 -eq 11"
end

Putting it all together in one recipe:

template 'C:\DBScripts\DatabaseMaintenance.sql' do
  source 'DatabaseMaintenance.sql.erb'
  variables(
    :backup_path => 'C:\db_backups',
    :retention_hours => '168'
  )
end

powershell_script 'sqlps module' do
  code 'Import-Module "sqlps" -DisableNameChecking'
end

execute 'setup db-maint jobs' do
  command "sqlcmd -S localhost -i \"C:\\DBScripts\\DatabaseMaintenance.sql\""
  guard_interpreter :powershell_script
  not_if "(invoke-sqlcmd -ServerInstance \"localhost\" -Query \"select count(*) from msdb.dbo.sysjobs where name like 'DBMaint%'\").Column1 -eq 11"
end

This is a quick and easy way to execute TSQL against an instance using the standard utilities (or cmdlets) we use outside of Chef, while still allowing for idempotence so that we only do something when something needs to be done, ensuring that the server is in the same desired state after every chef-client run.

AWS KMS Encryption/Decryption Script

As a follow-up to my blog post on Keeping Secrets in Chef with AWS Key Management Service, I wanted to post an updated script that can be used to encrypt/decrypt sensitive information. I've updated the script to accept a few parameters. Specifically:

-e --encrypt STRING (encrypt the specified string)
-d --decrypt STRING (decrypt the specified string)
-k --key KEY (full ARN or Key ID to be used to encrypt/decrypt)
-r --region REGION (region the key is located in)

You can find the script in my GitHub repo here; feel free to use it to encrypt/decrypt your sensitive information.
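
For example, invoking the script might look something like the following. This is just a sketch based on the parameter list above; the file name kms-crypt.rb is a stand-in for whatever you name the script, and the key ARN is the same sample ARN used later in this post.

# Encrypt a string
ruby kms-crypt.rb --encrypt 'SuperSecretPassword' --key arn:aws:kms:us-east-1:012345678901:key/01abc2d3-4e56-78f9-g01h-23ij45klm6n6 --region us-east-1

# Decrypt the Base64 string produced by the encrypt call
ruby kms-crypt.rb --decrypt '<base64 ciphertext>' --key arn:aws:kms:us-east-1:012345678901:key/01abc2d3-4e56-78f9-g01h-23ij45klm6n6 --region us-east-1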

Refresh File Type Icons on OSX

Have you ever changed a file type association on a Mac, only to have files still annoyingly show the icon for the old application instead of the new one? Run the following commands to refresh the icon cache, which should pick up the new icon.

sudo rm -rf /Library/Caches/com.apple.iconservices.store
sudo killall Finder

Keeping Secrets in Chef with AWS Key Management Service

Handling sensitive data in Chef can be a bit of a challenge. You can use encrypted data bags, but that can be tricky if you want a new node to access an existing encrypted data bag, as you have to re-encrypt the data bag after the node has been bootstrapped. You can also use an external method of encryption/decryption (OpenSSL), but then you have to handle security around the keys themselves. Enter AWS Key Management Service. KMS is a service you can use to store keys for encryption/decryption in AWS (EBS volume encryption, for instance) and can also be used as a sort of "Encryption as a Service". I'll show you how to encrypt Chef secrets using KMS and a little Ruby. This works best if you're Cheffing servers that will be running within AWS (as you can use IAM roles to provide greater security), but it is not exclusive to servers running in AWS; you can use it anywhere. Here's how it works.

At a high level we’ll do the following:

  1. Create a new KMS key
  2. Run Ruby script to encrypt some text
  3. Create a new server in EC2 with an IAM role allowing access to the KMS key
  4. Use the encrypted string in a Chef recipe

First off you'll need a KMS key. As with many things in AWS, there is a cost associated with this service: $1.00 per key per month, plus some small charges for key usage. Check out the KMS pricing page for more details. KMS can be found as an option within IAM; it doesn't have its own entry on the master AWS services list.

(Screenshot: the Security & Identity section of the AWS console)

Click on Identity and Access Management and on the right side you'll see an Encryption Keys link; click that and you'll be taken to your KMS keys. There may already be a few there used by services within AWS, so leave those alone. You'll need to create a new key for encryption/decryption within Chef. Note that once a key is created it cannot be deleted, only disabled. You aren't charged for disabled keys or for keys created and used by AWS services themselves. Click the Create Key button to create a new key. This will take you through a few screens: give it a name and a description, grant permissions to users/roles to administer the key, grant permissions to users/roles to use the key, and then create your new key. If you forget to add a user/role on either of those screens, you can always change that after key creation. Click on the key name after creation to see details on the key and modify its options if you need to.
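
If you'd rather not click through the console, a similar key can be created with the AWS CLI. This is just a sketch; the description and alias name here are made up, and you'll need the KeyId from the create-key output for the second command.

# Create a new key for Chef secrets
aws kms create-key --description "Chef secrets key" --region us-east-1

# Optionally give it a friendly alias, using the KeyId returned above
aws kms create-alias --alias-name alias/chef-secrets --target-key-id <key id> --region us-east-1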

So now you have a key, great! But how do we use it? Let’s take a look at some Ruby that can be used to encrypt/decrypt within KMS:

require 'aws-sdk-core'
require 'base64'

key_id = 'arn:aws:kms:us-east-1:012345678901:key/01abc2d3-4e56-78f9-g01h-23ij45klm6n6'
kms = Aws::KMS::Client.new(region: 'us-east-1')

# Get text from user
puts "Please enter the text you want to encrypt"
text = gets.chomp

# Encrypt entered text
encrypted = kms.encrypt({
  key_id: key_id,
  plaintext: text
})

# Display raw encrypted text
puts "Encrypted text raw:"
puts encrypted.ciphertext_blob
puts

# Display Base64 encoded text
puts "Encrypted text Base64 encoded:"
puts Base64.encode64(encrypted.ciphertext_blob)
puts

# Display Base64 strict encoded text
puts "Encrypted text Base64 strict encoded:"
puts Base64.strict_encode64(encrypted.ciphertext_blob)
puts

# Decrypt the encrypted text
puts "Now let's decrypt that"
decrypted = kms.decrypt({
  ciphertext_blob: encrypted.ciphertext_blob
})

# Display the decrypted text
puts "Here's the decrypted text:"
puts decrypted.plaintext

Open your editor of choice and paste this code in. Replace the key_id with the full ARN of the key you want to use, and make sure the region in the kms variable is set properly as well. This also assumes that you have your AWS credentials set up locally for an account that has API access to the KMS key. Save the file as kms-test.rb and run it from the command line with ruby kms-test.rb. You should see something like the following:
(Screenshot: kms-test.rb output showing the raw, Base64, Base64 strict and decrypted values)

Note the difference between the raw output, the Base64 encoded output and the Base64 strict encoded output. The Base64 strict encoded text is what we'll want to use in Chef, since it's one long string with no line breaks. This allows for easy storage in a Chef attribute, which is handy if we want to store a password for an account, for example.

So how do we use this in Chef? Let's say you have a recipe in which you want to decrypt a password for use by a command.

First, you're going to need an IAM role associated with your node. When you build a new node in AWS, ensure that its role includes the following IAM permissions (replace the ARN with the ARN of your KMS key):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
    ],
    "Resource": "arn:aws:kms:us-east-1:012345678901:key/01abc2d3-4e56-78f9-g01h-23ij45klm6n6"
  }]
}

Now, if you store the Base64 strict encoded string (which you can get via the kms-test.rb script) in a node['my-cookbook']['password'] attribute, you can use the following Ruby in your recipe to access it and decrypt it on your new node.

require 'aws-sdk-core'
require 'base64'

# The key ARN isn't strictly needed for decrypt (KMS identifies the key from the
# ciphertext), but the attribute is here if you need it for other calls
key_id = node['my-cookbook']['kms_key']
kms = Aws::KMS::Client.new(region: 'us-east-1')

# Decrypt the Base64 strict encoded ciphertext stored in the attribute
pw = kms.decrypt({
  ciphertext_blob: Base64.strict_decode64(node['my-cookbook']['password'])
}).plaintext

execute 'use-password' do
  command "command that uses #{pw}"
  sensitive true
end

This can be used without an IAM role (for instance, if you want to use it outside of AWS) by creating the KMS client in the recipe with explicit credentials:

kms = Aws::KMS::Client.new(
  region: 'us-east-1',
  access_key_id: node['my-cookbook']['aws_access_key_id'],
  secret_access_key: node['my-cookbook']['aws_secret_access_key']
)

That said, storing keys locally on any system is much less secure than using IAM roles, so make sure you understand the risks and implications in doing so.

Using these techniques, you can easily store and retrieve sensitive data via KMS for use in Chef and whip up some more awesome!

AWS Certified Solutions Architect – Associate Exam Review

A couple of weeks ago I sat the AWS Certified Solutions Architect – Associate exam and passed (woohoo!). I wanted to provide a bit of an overview of the exam itself and what I did to prepare.

The exam itself was 60 questions, with 80 minutes to answer all of them. I think I had 25 minutes or so left when all was said and done, so it was ample time for me. All questions are multiple choice; no simulations were involved, though there were some fill-in-the-blank style questions with multiple choice answers. As you go through the exam you have the opportunity to mark each question for review, and can then review any question once you have answered all of them. I'd suggest taking advantage of this. I marked questions I was uncertain about and used context from other questions, or a bit more time at the end, to think them through. I think I marked 6 questions or so, and am pretty sure I answered most of them correctly after changing my answers during the review period. Regardless, it's good practice to mark the ones you're unsure about so you can ponder them a bit more as time allows at the end, but leave an answer in case you don't have that extra time buffer.

As far as the questions themselves, per the testing agreement I can't disclose any detailed specifics, but I can give you some high-level thoughts on what was covered. First and foremost, solid hands-on experience with AWS will go a long way on this exam. Strong knowledge of EC2, VPC, S3 and IAM is good to have. The exam blueprint is a good place to start, but you'll have much more success if you spend time in the console building environments and playing around. Take the time to get very familiar with all the concepts in the blueprint and that will go a long way. Specifics on pricing for services were not covered in the exam, so you don't need to know things like how much you'll be charged per GB of ingress network traffic per VPC.

As far as preparation, along with hands-on experience I took the AWS Certified Solutions Architect – Associate 2015 course on Udemy [1]. It's a great course; it starts out a bit slow if you've had any experience with AWS, but it does a very good job of covering all the concepts you'd need to know in preparation for the exam. Ryan Kroonenburg is the instructor, and he's got a Professional course coming soon that I'm looking forward to. If you don't have experience with AWS, get a free account and start playing around; you'll need it if you take Ryan's course.

All in all it wasn’t terribly difficult, but I’m glad that I took the time to prepare, as I’ve no doubt it contributed to my success. If you’re taking it, good luck!

[1] https://www.udemy.com/aws-certified-solutions-architect-associate-2015/

CentOS 6.5 & Amazon Linux Active Directory Authentication in AWS

Update 8/13/2015
Samba 4.1 has now been made available in the AWS Amazon Linux repo (yay!). This now allows you to join Amazon Linux instances to your AD as well.

Linux can be configured to authenticate against an Active Directory domain, providing centralized access control and the ability to use a single account to administer Windows and Linux hosts, as well as reducing the number of users directly logging on as root. There are many blogs and guides online that detail steps to configure this, but depending on your flavor of Linux, version and wind direction they may or may not work. What follows is documentation of a successful implementation within an AWS environment. Once complete, authentication via SSH is permitted for local and AD users, and sudo permissions are also granted for required elevation.

This setup uses winbind to join the domain and provide authentication via Kerberos.

Prerequisites and Assumptions

Originally I tried setting this up on an Amazon Linux AMI, which I'd hoped would work since it's a RHEL-based Linux distribution, but I ran into a few issues. At the time, Amazon didn't provide the Samba package in their yum repo, so I downloaded the Samba source and compiled it myself; even after a compile and install, I still wasn't able to get it to work. Bottom line is that the steps outlined here were written against CentOS 6.5 (and presumably work on RHEL 6.5). At some point I'd like to retry Amazon Linux (now possible per the update above), as well as validate CentOS 7 (which, given the underlying changes in the OS, may be an entirely different setup process).

This has been validated to work on CentOS 6.x and Amazon Linux.

You'll need a Windows Active Directory environment. This post in no way outlines that setup, but the version tested against in this scenario was AD running at a 2012 R2 functional level. You'll want a group set up that you can add users to for authorization purposes; in this case I used two groups, Domain Admins and an environment-specific group (Linux Admins).

Your server should be pointed to the domain controllers for DNS resolution. In my case this was already handled through a DHCP option set in AWS. This can also be configured manually by updating /etc/resolv.conf with your DNS server entries. Here's what mine looks like in this case:

[root@adtest1 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search addomain.local
nameserver 10.145.97.32
nameserver 10.145.99.246

If you're set up in AWS, ensure that you've got a security group configured that allows all the necessary ports for AD and DNS communication from the domain members, and apply it to your host. If you're not in AWS, ensure you've got those ports opened. There are a number of places online where you can find the requirements, so I'm not going to list them out here.

In this example we’re using the following information:
Member server – adtest1
Member server IP – 10.145.97.224
AD Domain – addomain.local
AD Domain Shortname – addomain
DC1 – 10.145.97.32
DC2 – 10.145.99.246

Implementation Walkthrough

Build Server and Logon

Provision your CentOS 6.5 server and log on to it as root. I used the CentOS 6 AMI from the AWS Marketplace (https://aws.amazon.com/marketplace/pp/B00A6KUVBW) as I wanted my root volume on EBS. You'll use a key file for SSH authentication as root; once everything is set up, you'll authenticate with a password for your AD accounts.

Install Required Packages

Install all the necessary dependencies via yum. These are as follows, with a brief explanation of what they do and why they are needed. There are other dependencies that these will install, but this is the top level of what you’ll need.
samba-client – Provides SMB/CIFS communication functionality and provides the winbindd service to handle AD communication.
krb5-workstation – Provides the base Kerberos functionality required for Kerberos authentication.
samba-winbind-krb5-locator – Provides KDC resolution for winbind.
authconfig – Installs a command line tool (authconfig) to update the appropriate configuration files required for AD authentication. Easier than editing all the config files manually.
pam_krb5 – Provides a Kerberos module for PAM.
I also install bind-utils if it isn't already installed, so that you've got DNS troubleshooting tools (nslookup, dig & host).

To install all these prerequisites (and bind-utils) you can run a single command and get it done in one shot:

yum -y install bind-utils samba-client krb5-workstation samba-winbind-krb5-locator authconfig pam_krb5

Update Host Name and Hosts File

Since our server is in AWS, the host name it gets is essentially based on the IP address, so we'll want to change it to something that lines up with our standards and is a little more readable. This can be done by updating the /etc/sysconfig/network file. I love to do things via one command, so sed is our friend here. The following command can be used to update the file; make sure you substitute your own host name.

sed -i 's/localhost.localdomain/adtest1.addomain.local/g' /etc/sysconfig/network

You'll now want to update your hosts file so that the local server resolves properly. This is an important step for when we join the domain later on, as the DNS update performed once the domain join completes will look here to determine the FQDN of the server and update DNS appropriately. We'll just append to the file that's already there; again, adjust the host name and IP address appropriately.

echo '10.145.97.224    adtest1.ADDOMAIN.LOCAL adtest1' >> /etc/hosts

After this is done reboot your server so the OS picks up this change. You might be able to get away with just cycling the network service, but a reboot will ensure everything gets picked up properly.

Authconfig

Now that you have your host name updated, it's time to run the authconfig command. This will update the necessary config files in one shot, reducing the number of manual edits we need to make. I'll give you the command first (listed multiline for ease of reading), then go through what each option means.

sudo authconfig \
 --disablecache \
 --enablewinbind \
 --enablewinbindauth \
 --smbsecurity=ads \
 --smbworkgroup=ADDOMAIN \
 --smbrealm=ADDOMAIN.LOCAL \
 --enablewinbindusedefaultdomain \
 --winbindtemplatehomedir=/home/%U \
 --winbindtemplateshell=/bin/bash \
 --enablekrb5 \
 --krb5realm=ADDOMAIN.LOCAL \
 --enablekrb5kdcdns \
 --enablekrb5realmdns \
 --enablelocauthorize \
 --enablemkhomedir \
 --enablepamaccess \
 --enablewinbindoffline \
 --updateall

--disablecache – Disables the name service cache daemon (nscd), which is used for name caching. This prevents logon delays when logging on to the system or during other authorization attempts.
--enablewinbind – Enables winbind.
--enablewinbindauth – Enables authentication via winbind.
--smbsecurity=ads – Sets the security mode that winbind should use. In our case it's set to ads, which is Active Directory authentication.
--smbworkgroup=ADDOMAIN – The domain name of our AD domain in its short-name format. Note that this should be in all caps; I specifically had issues when it was lower case.
--smbrealm=ADDOMAIN.LOCAL – The domain name of our AD domain in its normal format. The same applies here: make it all caps.
--enablewinbindusedefaultdomain – Configures winbind to assume that users who don't specify a domain name are in fact domain users.
--winbindtemplatehomedir=/home/%U – Specifies where to put users' home directories.
--winbindtemplateshell=/bin/bash – Specifies the shell users will be configured to use.
--enablekrb5 – Enables Kerberos authentication.
--krb5realm=ADDOMAIN.LOCAL – Specifies the Kerberos realm. Make this all caps also.
--enablekrb5kdcdns – Configures Kerberos to use DNS to locate domain controllers.
--enablekrb5realmdns – Configures Kerberos to use DNS to locate realms.
--enablelocauthorize – Permits logon using local credentials. Necessary if you still want to log on with root or any other local account.
--enablemkhomedir – Enables creation of the user's home directory at logon if it doesn't already exist.
--enablepamaccess – Enables execution of the pam_access module during logon.
--enablewinbindoffline – Allows accounts to authenticate using cached credentials even if a DC is unreachable. I wasn't able to successfully test this, so you may still need to rely on root if the DCs are unavailable.
--updateall – Updates all the config files with the appropriate information.

If this command is successful you should see winbind start up. If it isn't, it should report any errors, so make sure things are good before proceeding.

Join AD Domain

So everything is good to this point; now it's time to join the domain. This will join the domain and add the respective DNS entry. You'll need to specify an account in this command that has privileges to add computers to the domain. Replace the domain name and account respectively.

net ads join ADDOMAIN.LOCAL -U administrator

This will prompt you for your domain password. If it’s successful you should get a response that you’ve joined the domain successfully and a DNS record has been created.
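
If you want to double-check the join afterward, net can do a quick sanity check of the machine account for you:

# Should report that the join is OK
net ads testjoin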

Update pam_winbind.conf

There are a couple of manual updates you'll need to make to the /etc/security/pam_winbind.conf file to enable home directory creation and set up which users can log on to the server. For the latter, you can specify multiple groups, but ensure they are in lower case (Linux FTW), are enclosed in quotes and are separated by a comma with no spaces. Again, we're using sed to easily update the config file. See the example below and update appropriately with your groups.

sed -i 's/;mkhomedir = no/mkhomedir = yes/g' /etc/security/pam_winbind.conf
sed -i 's/;require_membership_of =/require_membership_of = "domain admins","linux admins"/g' /etc/security/pam_winbind.conf

Sudoers

Finally, you'll likely want to allow users to sudo, so the following commands can be used to add a group (Domain Admins in this case) to the sudoers file. Under normal circumstances you'd use visudo, but since we're adding one entry and there are likely no other users on the server editing sudoers, we're pretty safe here.

echo '## Allows Domain Admins to run all commands' >> /etc/sudoers
echo '%domain\ admins ALL=(ALL)      ALL' >> /etc/sudoers

Start The winbind Service

Most likely the winbind service isn’t running yet, so go ahead and start it up.

service winbind start

You may also want to run a chkconfig to ensure that the winbind service is set to start up on boot. It should look similar to the following (starts up on runlevels 3, 4 & 5).

[root@adtest1 ~]# chkconfig --list winbind
winbind         0:off   1:off   2:off   3:on    4:on    5:on    6:off
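
If it isn't set to start on those runlevels, you can enable it with chkconfig as well:

# Enable winbind at boot
chkconfig winbind on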

Enable SSH Login with Password

On Amazon Linux, SSH keys are required by default for login to the system and login via password is explicitly disabled. In order for this to work you need to enable login with a password.

sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
service sshd restart

<h3 id="ActiveDirectoryAuthenticationForLinux-TestLogon">Test Logon</h3>
At this point you should be able to log on with AD credentails. Don't log off your existing session as root, since if something isn't set up right you could be locked out of your server entirely. Open up a new session and try to log on with your AD credentials. When you're on, try a sudo command and make sure that works as well.
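
For example, from another machine (the user name below is made up; use an account that's in one of the groups you allowed in pam_winbind.conf):

# Log on over SSH with an AD account and password
ssh aduser1@10.145.97.224

# Once logged in, confirm that sudo works (the account needs to be in the
# Domain Admins group per the sudoers entry above)
sudo whoami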
<h2 id="ActiveDirectoryAuthenticationForLinux-ScriptedSetup">Scripted Setup</h2>
So the above commands can be done in a relatively simple set of commands. There are basically two sections, separated by a reboot. As mentioned before, update with your appropriate host names, credentials, IP address, etc.

# Install required packages
yum -y install bind-utils samba-client krb5-workstation samba-winbind-krb5-locator authconfig pam_krb5
# Update the host name
sed -i 's/localhost.localdomain/adtest1.addomain.local/g' /etc/sysconfig/network
# Update hosts file
echo '10.145.97.224 adtest1.ADDOMAIN.LOCAL adtest1' >> /etc/hosts
# Reboot
shutdown -r now
# Run authconfig
sudo authconfig --disablecache --enablewinbind --enablewinbindauth --smbsecurity=ads --smbworkgroup=ADDOMAIN --smbrealm=ADDOMAIN.LOCAL --enablewinbindusedefaultdomain --winbindtemplatehomedir=/home/%U --winbindtemplateshell=/bin/bash --enablekrb5 --krb5realm=ADDOMAIN.LOCAL --enablekrb5kdcdns --enablekrb5realmdns --enablelocauthorize --enablemkhomedir --enablepamaccess --enablewinbindoffline --updateall
# Join AD domain
net ads join ADDOMAIN.LOCAL -U administrator
# Update pam_winbind.conf
sed -i 's/;mkhomedir = no/mkhomedir = yes/g' /etc/security/pam_winbind.conf
sed -i 's/;require_membership_of =/require_membership_of = "domain admins","linux admins"/g' /etc/security/pam_winbind.conf
# Update sudoers
echo '## Allows Domain Admins to run all commands' >> /etc/sudoers
echo '%domain\ admins ALL=(ALL)      ALL' >> /etc/sudoers
# Start winbind
service winbind start
# Enable ssh with password
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
service sshd restart

Troubleshooting

If you have issues with authentication after following these steps, keep in mind that most of the authentication modules log to /var/log/secure. You can review this log to confirm that authentication is succeeding and to investigate any errors that show up.
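
For example, you can keep a tail running on that log in your root session while you test a logon from another session:

tail -f /var/log/secure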

The wbinfo command can be used to test communication with the domain controllers. If you run wbinfo -u, it should output all the users in the domain. If this works, you've got basic connectivity; if it doesn't, you've got issues.
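
A few quick wbinfo checks worth running; -t verifies the trust secret with the domain, and -u and -g list domain users and groups through winbind:

wbinfo -t
wbinfo -u
wbinfo -g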

If you look on your domain controllers, you should see both an AD computer account for your server and a DNS entry that matches the IP address of the server. Make sure these are both there; if they're not, something didn't go quite right.
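
The DNS side can also be checked from the Linux box itself with the bind-utils installed earlier; the record should come back with the member server's IP (10.145.97.224 in this example):

host adtest1.addomain.local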

I mentioned it earlier, but the domain names in the authconfig command need to be in all caps. Don’t overlook this, as I wasn’t able to get it to work in lower case.

References

Below are the various blogs and web sites I referenced as I was going through the setup.

https://mikrocentillion.wordpress.com/2013/06/05/centos-6-authenticate-and-sudo-active-directory-users/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Windows_Integration_Guide/winbind-auth.html

https://digitalchild.info/active-directory-authentication-with-centos/

http://kura2gurun.blogspot.com/2011/10/authentication-failure-using-ssh.html

http://www.slideshare.net/AshwinPawar/krb5

http://docs.fedoraproject.org/en-US/Fedora/14/html/Deployment_Guide/ch-Authentication_Configuration.html