Random Enough Passwords

OK, as a side duty to many of the roles I fill, I wind up installing and administering countless small apps, VMs, and physical machines. I don’t want a system I created to be hacked because 1234 was considered a secure enough password. One side effect of this is that I must now use a password manager and back it up. God help me if I lose that file. Additionally, this has caused my internal entropy for generating passwords to drop to zero. In other words, I’m tired of thinking of random passwords. Thanks to an article here: http://blog.colovirt.com/2009/01/07/linux-generating-strong-passwords-using-randomurandom/ , I no longer have to. To sum it up:


#!/bin/bash
PASSLEN=5
# read random bytes, keep only the allowed characters, wrap to PASSLEN,
# keep only candidates containing at least one symbol, then take 4
tr -dc 'a-zA-Z0-9_!@#$%^&*()+{}|:<>?=-' < /dev/random | fold -w "$PASSLEN" | grep '[!@#$%^&*()+{}|:<>?=-]' | head -n 4

This script will produce four different 5-character passwords to choose from. It generates a really random set of passwords, but you must generate entropy by using your system, because /dev/random blocks until things are random enough. If that is too slow for you, use /dev/urandom instead. This is not statistically perfect, so don’t use it for anything requiring true randomness. If you don’t know what /dev/random or /dev/urandom are, this is not the post for you.
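If waiting on the entropy pool is a problem, the same pipeline works against /dev/urandom; a minimal variant of the script above:

#!/bin/bash
PASSLEN=5
# /dev/urandom never blocks, at the cost of weaker entropy guarantees
tr -dc 'a-zA-Z0-9_!@#$%^&*()+{}|:<>?=-' < /dev/urandom | fold -w "$PASSLEN" | grep '[!@#$%^&*()+{}|:<>?=-]' | head -n 4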

-Glenn

svn L&P testing in 10 lines or less

I needed a quantifiable test that can measure svn performance during a checkout. This script takes two arguments: the number of checkouts and the parallelism. For example, if I want to run 100 checkouts 2 at a time: ./load.sh 100 2; or 100 checkouts 50 at a time: ./load.sh 100 50.


#!/bin/bash
# usage: ./load.sh <number of checkouts> <parallelism>
url="http://mysvnrepo"
DATE=`date +%m%d%y%H%M%S`

# create one working directory per checkout
i=0
while [ $i -lt $1 ] ; do mkdir $i; let i=$i+1; done

# run the checkouts $2 at a time, timing each one into its directory
find . -maxdepth 1 -type d ! -name . | sed "s/\.\///g" | xargs -I'{}' -P$2 /usr/bin/time -o {}/time.dat svn co $url {}

# gather the timings, strip the noise, and sort into a csv
find -iname time.dat -exec cat {} >> total_$1_$2_$DATE.dat \;
grep -v swaps total_$1_$2_$DATE.dat | sed "s/user /\t/g" | sed "s/system /\t/g" | sed "s/elapsed.*//g" | sort -n > res_$1_$2_$DATE.csv

# clean up the working directories
i=0
while [ $i -lt $1 ] ; do rm -rf $i; let i=$i+1; done


The results are recorded in a file named with both test parameters and the date. A little bit of sed magic later and you have a CSV that will make pretty graphs in Excel or LibreOffice Calc. Enjoy.
-Glenn

Posted in L&P

Short intro to Puppet

Puppet can be daunting at first. Here is a quick explanation of the elements I found most useful when first being introduced to Puppet.

Configuration Language

The configuration language reference can currently be found at http://docs.puppetlabs.com/puppet/2.7/reference/ and http://docs.puppetlabs.com/puppet/3/reference/index.html . There are several parts to the Puppet language. What follows is a quick description of the items most useful to me: Facter, a few of the core resource types, and Augeas.

Facter

Facter presents system configuration as simple name/value pairs called facts. For example:

swapfree => 0.00 kB
swapsize => 0.00 kB
timezone => Local time zone must be set--see zic manual page
uniqueid => d40a1a41
uptime => 37 days
uptime_days => 37
uptime_hours => 910
uptime_seconds => 3276155

There is a set of default facts you get when you install Facter which are generally useful: CPU type, number of CPUs, hypervisor info, OS type, etc. This catalogue of facts is used to identify the machine in your configuration management database. Additionally, these facts are all accessible as variables inside your Puppet code.
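For instance, a fact can be dropped straight into a manifest as a variable; a minimal sketch using two of the default facts:

notify { "machine-info":
  message => "This is a ${operatingsystem} box, up ${uptime_days} days",
}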

Lastly, regarding Facter: you can create custom facts. Custom facts are authored in Ruby. You create a small Ruby program that returns a string representing the answer to a complicated calculation. Every time Puppet runs on the machine, the fact is recalculated.
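A minimal sketch of what such a fact might look like (the fact name and the /etc/role file are made up for illustration):

# e.g. lib/facter/role.rb inside a module
Facter.add(:role) do
  setcode do
    # any ruby that returns a string works here
    File.exist?('/etc/role') ? File.read('/etc/role').chomp : 'unknown'
  end
end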

Puppet Language Highlights

Puppet allows you to describe an intended system state. It does this with resources that represent the major pieces of a running OS. Packages, services, files, users, groups, hosts, and mounts are just some examples of resource types.

File

Typically you are looking at files to control a machine's configuration. Puppet gives you two main ways to deal with these configuration files: copy them wholesale from a repository, or generate them from a template based on facts.

file { "/etc/ssh/sshd_config":
  content => template("openssh/sshd_config.erb"),
  require => Package["openssh"],
}

This statement will create an sshd_config file in /etc/ssh based on the template sshd_config.erb. The template files are plain text except for the variables. For example:

ListenAddress <%= ipaddress %>

This allows the Puppet class to create a correct config file based on the current environment, with the facts pre-populated. Dynamically generating the configuration lets the system be right for wherever it is.

Services

Services are system-level programs that run in the background: services in the Microsoft world, daemons in the UNIX world. The service resource is very similar to the file resource in structure.

service { "syslog":
  enable    => "true",
  ensure    => "running",
  hasstatus => "true",
  require   => File["syslog.conf"],
  subscribe => File["syslog.conf"],
}

This makes syslog start at boot, run if it's not currently running, report its status, depend on its config file, and restart automatically when that config file changes. This class can be applied to a running system and the changes take effect immediately and correctly. If the resource is constructed correctly, a service knows all of its dependencies and reacts accordingly.

Packages

Packages are how an OS manages an installable feature. In UNIX land this typically means rpm, yum, yast, etc. In Gentoo it's the emerge system and ebuilds; in the Microsoft world it's the package installers.


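The package resource follows the same shape as file and service. A minimal sketch (the package name is illustrative; use whatever name your platform's package manager knows):

package { "openssh":
  ensure => installed,
}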
Above is an example of Puppet installing a package: ensure => installed tells Puppet to install the package if it is missing, using the package provider native to the OS.

Augeas

While Augeas is technically just another resource type, it is quite a sophisticated way of manipulating configuration files. Basically, Augeas holds a grammar for each config file you may be interested in: httpd.conf, sshd_config, etc. It loads the file into a parse tree in memory and lets you manipulate that tree to add syntactically correct statements. For example, you can change values in a php.ini file in one fell swoop, without any complicated text manipulation.


augeas { "php.ini":
  notify  => Service[httpd],
  require => Package[php],
  context => "/files/etc/php.ini/PHP",
  changes => [
    "set post_max_size 10M",
    "set upload_max_filesize 10M",
  ],
}

This changes just those two values in the file without any sed or perl magic. The same approach applies to more complicated objects like httpd.conf virtual hosts and other directives. Augeas is the most reliable way to change a file you don't want to turn into a template.


AWS updates on the cheap

Gentoo Build Server/Compute

Intro and Prep

Gentoo compiles everything. After I uploaded my custom Gentoo image, I realized that updating on a 634MB, single-virtual-CPU machine might be a bad thing. I would hit the I/O limit or the CPU limit and actually have to pay money, ugh. Following rich0's blog, let's assume your first instance is up and running, has been for a while, and now needs an update; keep the chrooted environment you created in step 7.

Exposing your update server

Now you need an Apache web server set up. It does not need to be exposed to the internet, though that makes things easier. Go into the Apache document root and create a symlink into the chrooted environment, specifically to where emerge and quickpkg leave their package files. In my case I did this:
ln -s /home/binserver/bounce/home/portage/distfiles/ bounce-bin

Here /home/binserver/bounce is my chrooted environment and /home/portage/distfiles/ is my PKGDIR from make.conf. If you chose the default, I believe it would be usr/portage/distfiles inside your chroot. Next we need to expose the portage tree of the update server via rsync. First, install rsync on the same box as your Apache server. Second, share the portage tree of your chroot. I have the following configuration in rsyncd.conf:

[gentoo-portage]
path = /home/binserver/bounce/usr/portage
comment = Bin server for outpost.

This is an important point: it makes your EC2 instance sync against your update server instead of the official Gentoo tree, which keeps your generated package versions in sync with the EC2 portage tree. Your update server should now be ready to go. If your internet connection allows rsync and http, make sure they are mapped to the right ports.
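As a quick sanity check (a sketch, assuming the layout above and open ports), both services can be probed from any client:

# rsync should list the shared module
rsync rsync://<external ip>/
# apache should serve the package directory
curl http://<external ip>/binhost/bounce-bin/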

Lastly, chroot into the update environment, mount the special file systems, and update. For me that process is:

cd <chroot env>
mount -t proc none <chroot env>/proc
mount -o bind /dev <chroot env>/dev
chroot <chroot env> /bin/bash
# inside the chroot: sync the tree and update, building binary packages (-b)
emerge --sync
emerge -avuNDb world
# run quickpkg over every package name in the tree; only installed ones get packaged
cd /usr/portage
find /usr/portage -maxdepth 2 -type d | sed -e 's/\/usr\/portage\/.*\///g' | xargs -P1 quickpkg --include-config=y
chown -R apache:apache /home/portage/distfiles

Now we are ready to update.

The EC2 image

Now you need to point the make.conf on the EC2 image at your update server. Go to whatsmyip.org and get your external IP. In make.conf:

PORTAGE_BINHOST="http://<external ip>/binhost/bounce-bin"
SYNC="rsync://<external ip>/gentoo-portage"

Now on the EC2 image:

emerge --sync
emerge -avGK world

This should download all your updates and install them. Hopefully you are now up to date!

Isolated Compute Server

You have two options: tar, or ssh port-forwarding magic. With the tar option you are basically copying the compute server's portage tree and package directory up to the EC2 instance. With the ssh forwarding method you are basically substituting localhost for the <external ip> in the make.conf example.

SSH Forwarding

This method is preferred: if you use the tar method and forget a package, you need to move another binary up to the EC2 server; if you use ssh tunnelling, you just run emerge again. For ssh forwarding to work, an ssh server needs to be running on the EC2 instance, the EC2 instance must have an IP addressable from your compute server (public or internal VPN), and the ssh port must be allowed through all firewalls between the compute server and the EC2 instance. Once the prerequisites are met you can do the following:

compute server> ssh -R 873:localhost:873 -R 80:localhost:80 root@ec2instance
ec2 instance> emerge --sync
ec2 instance> emerge -avuNDGK world
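With the reverse tunnels up, the make.conf on the EC2 instance is the same as before with localhost swapped in for the external IP:

PORTAGE_BINHOST="http://localhost/binhost/bounce-bin"
SYNC="rsync://localhost/gentoo-portage"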

Tar method

On the compute server:


cd /
tar cvf /tmp/portage.tar /usr/portage /home/portage/distfiles

Then move the tarball up to the EC2 server.
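For example, with scp (host placeholder as above):

scp /tmp/portage.tar root@ec2instance:/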

cd /
tar xvf portage.tar
emerge -avuNDGK world

-Glenn

Posted in AWS

Welcome to “the cloud”

I have been doing configuration management for over 10 years. With the explosion of “the cloud”, configuration management has moved from an exercise in careful planning and policy to an engineering effort. Most enterprises are slow to pick up on this shift in the industry. This blog and the next series of posts detail several case studies and practical examples. I hope you enjoy the ride.

-Glenn


Posted in AWS