I need to download a script from github, but I don't have git on the windows machines. On Linux I just used curl -u; on windows it needed more than a one-liner.
Here's what I came up with. We have self-signed certs, so I need to fool System.Net into thinking all certs are good; that part is a one-liner:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
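
If you want normal certificate validation back later in the same session, clear the callback again:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = $null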

Next I create a WebClient object and set two headers:

$client = New-Object System.Net.WebClient
$client.Headers.Add("Authorization","token 1234567890notarealtoken987654321")
$client.Headers.Add("Accept","application/vnd.github.v3.raw")

I set up an OAuth token for github earlier, so I use that as my authorization to log in to github enterprise. The second header tells github that I want raw files (if you don't set this, you get back JSON).

Finally, I download the file.

$client.DownloadFile("https://github.company.com/api/v3/repos/me/my_repo/contents/test/test-script.ps1?ref=branch_I_need","C:\temp\test-script.ps1")

You can probably roll this up into a one-liner and get rid of the $client object, but this is much more readable. Here's everything together.


[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$client = New-Object System.Net.WebClient
$client.Headers.Add("Authorization","token 1234567890notarealtoken987654321")
$client.Headers.Add("Accept","application/vnd.github.v3.raw")
$client.DownloadFile("https://github.company.com/api/v3/repos/me/my_repo/contents/test/test-script.ps1?ref=branch_I_need","C:\temp\test-script.ps1")

A little more work than

curl -u me -k https://github.company.com/api/v3/repos/me/my_repo/contents/test/test-script.ps1 >test-script.ps1
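
On PowerShell 3.0 or later, Invoke-RestMethod gets you closer to the curl one-liner (the certificate callback is still needed for our self-signed certs; the token and URL are the same placeholders as above):

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
Invoke-RestMethod -Uri "https://github.company.com/api/v3/repos/me/my_repo/contents/test/test-script.ps1?ref=branch_I_need" -Headers @{"Authorization"="token 1234567890notarealtoken987654321";"Accept"="application/vnd.github.v3.raw"} -OutFile "C:\temp\test-script.ps1"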

Tutorial I gave at LISA 2014 http://goo.gl/G0TLfJ

This is a talk about running puppet in the enterprise, or at scale. The original title of Mastering Puppet was Puppet in the Enterprise; the talk was to present the ideas from the book.

The tutorial files are located on github at: https://github.com/uphillian/lisa2014

I uploaded the lisa2014.iso Live image used in the demo to dropbox; get it here: https://www.dropbox.com/s/r15svwvtnrq8t0k/lisa2014.iso?dl=0

This keeps coming up so I thought I'd share one trick we've used to figure out if there are stale nodes out there. These are nodes that are failing to update for various reasons that won't be reported in your reporting mechanism. One of the common causes is an expired or revoked certificate. The agent never gets far enough to report a failure.

In these cases, provided mcollective was running and configured on the node, you may still see the node in mcollective and think everything is fine. If you have a small enough implementation you can probably track down these hosts one by one, but this is how we do it with a few thousand nodes. I'm assuming you are configuring mcollective from puppet (this won't work if you aren't).

Go into your activemq configuration and add authorizationEntry lines for a new collective; call it whatever you like (I'm calling mine stalecollective):

<authorizationEntry queue="stalecollective.>" write="mcollective" read="mcollective" admin="mcollective" />
<authorizationEntry topic="stalecollective.>" write="mcollective" read="mcollective" admin="mcollective" />

Now go into your mcollective server configuration and edit the main_collective and collectives settings.

main_collective = stalecollective
collectives = stalecollective,mcollective

Sit back and wait. I usually use the default checkin interval of 30 minutes, so waiting 60 minutes or so works well. To run mco against the new collective you'll first need to add it to your client.cfg or ~/.mcollective.
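
A minimal sketch of that client-side change, assuming the stock client configuration settings:

main_collective = mcollective
collectives = mcollective,stalecollective

With that in place, query the new collective: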

mco find -T stalecollective -v

You should see only your active hosts now. Possibly more interesting, run mco against the original collective to see the stale hosts:

mco find -T mcollective -v
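
To boil that down to just the stale list, you can diff the two collectives; mco find prints one identity per line, so standard text tools do the job (a sketch):

mco find -T mcollective | sort > all_nodes
mco find -T stalecollective | sort > active_nodes
comm -23 all_nodes active_nodes

comm -23 prints only the lines unique to the first file, i.e. the nodes that are still in the old collective but never joined the new one.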

If you have hosts that check in less frequently you might get a few false positives, but this is still a good starting point for finding the nodes that aren't updating their configurations.

Mastering Puppet

I've been running into the 800-node limit on mcollective and splitting up my nodes into subcollectives. I had a spot where I couldn't split up the nodes, so I started looking at why we were hitting this 800-node wall.

I'm using activemq with the ssl plugin. After turning on all the debugging I could find in activemq, it turns out it's just a simple resource limit problem.

With activemq running, I waited for my nodes to connect and watched the number of threads on the active java process. (This is after increasing the memory limits for activemq as described on the puppetlabs website.)

Getting the number of threads, two different ways.

$ pgrep java |xargs ps uH |wc -l
1023
$ pgrep java |xargs -I % ls -l /proc/%/task |wc -l
1023

Either way we are seeing around 1024 processes (threads), which looks suspiciously like a limit. I increased the limits in /etc/security/limits.d/activemq.conf

activemq soft nofile 16384
activemq hard nofile 16384
activemq soft nproc 4096
activemq hard nproc 4096

Not really sure if the nofile limit is required, but nproc seems to fix my issue.
After restarting activemq:

$ pgrep java |xargs ps uH |wc -l
1530
$ pgrep java |xargs -I % ls -l /proc/%/task |wc -l
1530
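
If your kernel exposes /proc/<pid>/limits you can also confirm the new limits were picked up by the running broker (assuming, as with the pgrep commands above, that java here is only activemq):

$ cat /proc/$(pgrep java)/limits | grep -Ei 'processes|open files'
Max processes             4096                 4096                 processes
Max open files            16384                16384                files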

The number of nodes returned by mco find goes from a random result in the 800-1000 range to the 1400 or so that I was expecting.

I'm going to have to update my section on mcollective in my book.

I wanted to apply puppet classes to a node using a script. I started looking at the foreman REST API but stumbled upon the foreman_api ruby gem. I specified hostgroups in foreman and added puppet classes to the hostgroups. The idea is that I want to be able to change the hostgroups using a script.

Running things through irb, this is what I came up with for changing the hostgroup:

#!/usr/bin/ruby

require 'rubygems'
require 'foreman_api'

hostname = 'node1.example.com'

hosts = ForemanApi::Resources::Host.new({
  :base_url => 'http://foreman.example.com',
  :username => 'apiuser',
  :password => 'PacktPub'
})

hosts.update({"id" => hostname, "host" => {"hostgroup_id" => 2}})

Hammer-cli looks a little more promising for this sort of programmatic changing of foreman parameters. With hammer I was able to add arbitrary classes to a node.


$ hammer host update --name node1.example.com --puppetclass-ids 14,35
Host updated
$

With either method I need to know the IDs of the hostgroups or the puppet classes.
I can look those up with either tool, but I still need to work them out first.


$ hammer puppet_class list
-------------------------------
ID | NAME
-------------------------------
14 | example_one
35 | example_two
-------------------------------
$
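
For the hostgroup IDs, hammer has a matching list subcommand (assuming the hostgroup resource is available in your hammer version):

$ hammer hostgroup list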

While configuring OMD (yes, Orchestral Manoeuvres in the Dark; no, not really) I ran into a point at which apache was supposed to run as the OMD user for check_mk. Hard-coded into the check_mk configuration is a call to

sudo su - check_mk -c check_mk\ --automation\ *

I've seen this many times, sysadmins doing sudo su -, but why? It reminds me of an admin who would routinely do cat file | less, or my favourite, cat file | grep something | wc -l when grep -c something file would be just about as nice. The other problem I have with putting su in the middle is that you could have much cleaner sudoers files. I might be in a minority of opinion though; the man page for sudo has examples with su in them, and why, I don't know.

In the above code we are saying: sudo to root, run su as root, then tell su to drop to a specific user and run the command given with -c. We could just as easily have said:

apache ALL=(omduser) NOPASSWD:SETENV: ALL

Then change the command being run to this:

sudo -u omduser check_mk --automation --

This is fundamentally different. We aren't asking to run anything as root; we drop privileges much sooner in the game and become omduser to run the command, instead of running su as root and relying on su to drop privileges. Not only that, we have just allowed the apache user to run anything as omduser, so if we need to run some other omd command later, it's already covered. We should lock it down further, true, but I would feel a lot better about a wildcard in that last sudoers entry; the absolute worst case is that apache could run an arbitrary command as omduser.

sudo su - ... I know I've typed it, but it's just wrong. I should be doing

sudo -i

It must be correct; it has fewer characters in it.

So I started looking and I couldn't find a best practices doc for using sudo. Until I find it, I thought I would start something here from what limited experience I have. I present my version of...

Best practices for sudo

  • Always edit sudoers with visudo
  • If your sudo supports sudoers.d, use it.
  • If UserA needs to run commands as UserB, use the Runas_Spec to allow UserA to run as UserB

    UserA ALL = (UserB) ALL
  • Use wildcards sparingly and only in arguments


    UserA ALL = (ALL) /opt/vendor/version-*/bin/something
    UserA ALL = (ALL) /opt/vendor/version-11.0.1/bin/something *

    The first rule is more dangerous than the second: the wildcard sits in the path component, so it will match any version directory, including one UserA creates. If UserA can drop their own script at /opt/vendor/version-mine/bin/something, then

    sudo /opt/vendor/version-mine/bin/something

    matches the wildcard and permits UserA to run their own code as root. The second rule allows UserA to run /opt/vendor/version-11.0.1/bin/something with any arbitrary arguments, so provided we trust that the command 'something' doesn't have any shell escapes, we should be ok.
  • Only allow the use of sudoedit for editing files. Most editors provide shell escapes. If you permit a user to run vim for instance, they can execute any command using :!
  • Only allow users to run commands as root in paths in which they do not have write access. If UserA can edit '/opt/vendor/version-11.0.1/bin/something', then UserA must not be allowed to execute 'something' as root.
  • If any user is allowed root access, then no other user should be allowed open access to that user. This is known as user hopping.

    UserA ALL = (ALL) NOPASSWD: ALL
    UserB ALL = (UserA) NOPASSWD: ALL

    UserB can run any command as UserA, but UserA can run any command as root. So to get a root shell, all UserB has to do is sudo -u UserA sudo bash:

    UserB@host ~ $ sudo -u UserA sudo bash
    root@host /home/UserB $
  • Never forget that Runas_Spec has a group component...

    UserA ALL = (:wheel) NOPASSWD: ALL


    root@host /etc/sudoers.d $ sudo -iu UserA
    UserA@host ~ $ touch hello
    UserA@host ~ $ sudo -g wheel touch there
    UserA@host ~ $ ls -l hello there
    -rw-rw-r--. 1 UserA UserA 0 Jan 28 22:35 hello
    -rw-r--r--. 1 UserA wheel 0 Jan 28 22:35 there
  • Teach your users to use sudo -l
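
    For example, with the Runas_Spec rule from above in place, sudo -l run as UserA produces output along these lines (illustrative; the exact wording varies by sudo version):

    UserA@host ~ $ sudo -l
    User UserA may run the following commands on host:
        (UserB) ALL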

I bet this is written down somewhere famous though...

Other fun things I noticed... useradd UserA creates UserA, not usera; it's a distinctly different user... rabbit hole approaching...

root@host $ id usera
id: usera: no such user
root@host $ id UserA
uid=1003(UserA) gid=1003(UserA) groups=1003(UserA)

I'm not sure of the utility of this, but maybe it'll be useful to someone else. I was asked to output all the facts from a system in XML; not wanting to type much, I made the following script...


#!/usr/bin/env ruby

require 'rubygems'
require 'facter'
require 'activesupport'

# load all the facts, then collect them into a hash
Facter.loadfacts
facts = {}

for fact in Facter.list.sort
  facts[fact] = Facter.value(fact)
end

# activesupport gives Hash a to_xml method
xml = facts.to_xml(:root => "facts")

print xml

The output looks like the following:


<?xml version="1.0" encoding="UTF-8"?>
<facts>
  <architecture>x86_64</architecture>
  <hostname>lmnbsd01094d02</hostname>
  <lsbdistcodename>Tikanga</lsbdistcodename>
  <network_lo>127.0.0.0</network_lo>
  ...
</facts>

Possibly useful someday...I require activesupport, so you'll need to install that gem. That gem gives you the to_xml method for hashes.

Enjoy.

Scenario

machine A (192.168.100.1) provides resource A on port 8888
machine B (192.168.200.1) needs to access resource A

Without modifying machine B (not allowed), create machine C and have any traffic to machine C on port 8888 forwarded to machine A. Then tell machine B that machine C is machine A, and nobody is the wiser. None of the examples I found online had this working properly.

I eventually came up with the following.
machine C has two interfaces:
one that is on the same network as machine B - 192.168.200.2 eth0
one that is on the same network as machine A - eth1


*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4698:1663560]
-A FORWARD -i eth0 -o eth1 -p tcp -m tcp --dport 8888 -j ACCEPT
-A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED -j ACCEPT
-A FORWARD -j DROP
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [28:2328]
:OUTPUT ACCEPT [831:65096]
-A PREROUTING -d 192.168.200.2 -p tcp -m tcp --dport 8888 -j DNAT --to-destination 192.168.100.1:8888
-A POSTROUTING -o eth1 -j MASQUERADE
COMMIT

So we allow traffic from eth0 to eth1 on port 8888 with the forward rule, which then passes to the nat table for masquerading. The thing that most blogs are missing is the reverse connection from eth1 to eth0 for the established traffic. Without that you can connect and send data, but nothing ever comes back. You have to enable ip_forward with sysctl as usual. You might need to tweak rp_filter, but I didn't.
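
For reference, turning on forwarding is a single sysctl on machine C (add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to make it stick across reboots):

sysctl -w net.ipv4.ip_forward=1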

I routinely used to transfer data between systems using rsync. Since I wanted the communication to be secure I used ssh keys. I noticed that my trick for using a command in the key isn't terribly well documented, so here is how I do it...

Goal: Keep /opt/after on machine B in sync with /opt/before on machine A.

On machine A, create an ssh key for this


$ ssh-keygen -f id_rsync

Copy id_rsync.pub from machine A to machine B, create an rsync account for the transfer, and place the key into that account's authorized_keys file on machine B. Add a command to the key so we can capture the command sent from machine A. We'll be taking the captured command and replacing it in the key later. This way we don't have to work out the options that rsync wants at the receiving end.

~rsync/.ssh/authorized_keys on machine B


command="echo `date` $SSH_ORIGINAL_COMMAND >> ssh.log && exec $SSH_ORIGINAL_COMMAND" ssh-rsa AAAAnotmyrealkeysadly thomas@machineA

Now on machine A


$ rsync -e 'ssh -i id_rsync' -avc /opt/before/ rsync@machineB:/opt/after
./
auth.conf
hiera.yaml -> /etc/hiera.yaml
puppet.conf
modules/

sent 5258 bytes received 61 bytes 3546.00 bytes/sec
total size is 5001 speedup is 0.94

Now on machine B we can look at the contents of the ssh.log file in ~rsync's home directory.

Tue Dec 3 01:34:41 EST 2013 rsync --server -vlogDtprce.iLsf . /opt/after

Cool, now we just have to take that rsync --server part and put that in our key.

~rsync/.ssh/authorized_keys on machine B


command="rsync --server -vlogDtprce.iLsf . /opt/after" ssh-rsa AAAAnotmyrealkeysadly thomas@machineA

Additionally we can add a from clause to make sure that only machineA can send to machineB using this key.

~rsync/.ssh/authorized_keys on machine B


command="rsync --server -vlogDtprce.iLsf . /opt/after",from="machineA" ssh-rsa AAAAnotmyrealkeysadly thomas@machineA

Incidentally, if you use this syntax in the keys, you'll get this helpful message in /var/log/secure when you try from the wrong machine...

Dec 3 01:42:57 machineB sshd[22717]: Authentication tried for rsync with correct key but not from a permitted host (host=machineC, ip=192.168.100.1).

I was trying to allow a user to sudo to another account and run a specific command. I'm not a fan of getting them to run through su since it doesn't make much sense to involve a third tool in the equation. I could get it working with the following:


theiruser ALL=(runasuser) NOPASSWD:/usr/local/bin/script.sh

The user could run script.sh with sudo -u runasuser /usr/local/bin/script.sh and it worked as expected, but if they tried sudo -iu runasuser /usr/local/bin/script.sh they got prompted for a password because the command didn't match.

I found out that the -i option runs the command through the target user's login shell with a -c option, so in the case of this user, /bin/bash. So I just had to change the sudoers entry to this:


theiruser ALL=(runasuser) NOPASSWD:/bin/bash -c /usr/local/bin/script.sh

After that
sudo -iu runasuser /usr/local/bin/script.sh
works as expected.
