Who's checking in: the mcollective trick

This keeps coming up, so I thought I'd share one trick we've used to figure out if there are stale nodes out there: nodes that are failing to update for various reasons and never show up in your reporting mechanism. One of the common causes is an expired or revoked certificate; the agent never gets far enough to report a failure.
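
The gist of the trick: anything with a signed certificate that never answers on the message bus is suspect. A rough sketch of the comparison, assuming a working mco client on the puppet CA host (the awk fields may need adjusting for your versions):

# hosts that actually respond on the bus
mco ping | awk '/time=/ {print $1}' | sort > /tmp/responding

# hosts the CA has signed certs for
puppet cert list --all | awk '{print $2}' | tr -d '"' | sort > /tmp/signed

# signed certs with no live agent behind them: stale node candidates
comm -13 /tmp/responding /tmp/signed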

mcollective and activemq: the 800 node limit

I've been running into the 800 node limit on mcollective and splitting up my nodes into subcollectives to work around it. In one spot I couldn't split up the nodes any further, so I started looking at why we were hitting this 800 node wall.
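
One thing worth ruling out early is file descriptors: every connected node holds a TCP connection (and so a descriptor) on the broker, and a stock nofile limit of 1024, minus activemq's own open files, lands suspiciously close to an 800 connection ceiling. A quick check on the broker, assuming activemq is the only java process on the box:

# what limit is the broker actually running with?
grep 'open files' /proc/$(pidof java)/limits

# how many descriptors is it holding right now?
ls /proc/$(pidof java)/fd | wc -l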

Foreman ruby api: adding a node to a hostgroup or puppet classes to a node

I wanted to apply puppet classes to a node using a script. I started looking at the foreman REST API but stumbled upon the foreman_api ruby gem. I specified hostgroups in foreman and added puppet classes to the hostgroups; the idea is that I want to be able to change a node's hostgroup using a script.

Running things through irb, I worked out the foreman_api calls for changing the hostgroups.
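
The gem is wrapping REST calls; for comparison, the same hostgroup change made directly against the API looks roughly like this (hostname, credentials, and IDs are placeholders):

# move host 42 into hostgroup 3 via the REST API
curl -k -u admin:changeme -H 'Content-Type: application/json' \
  -X PUT -d '{"host":{"hostgroup_id":3}}' \
  https://foreman.example.com/api/hosts/42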
 

Why is everyone using sudo wrong? Or is it me?

While configuring OMD (yes, Orchestral Manoeuvres in the Dark, no, not really) I ran into a point at which apache was supposed to run as the OMD user for check_mk. Hardcoded into the check_mk configuration is a call to

sudo su - <omduser> -c check_mk\ --automation\ *
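
That's sudo becoming root so that su can become the site user: two tools for one job. sudo can grant the target user directly; a sketch of the equivalent, with the site name and path as placeholders from a standard OMD layout:

# sudoers: let apache run the automation command as the site user, no su
apache ALL = (mysite) NOPASSWD: /omd/sites/mysite/bin/check_mk --automation *

# and the call becomes
sudo -u mysite /omd/sites/mysite/bin/check_mk --automation ...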

making an xml of facts on the system

I'm not sure of the utility of this, but maybe it'll be useful to someone else. I was asked to output all the facts from a system in XML; not wanting to type much, I made the following script...

#!/usr/bin/env ruby

require 'rubygems'
require 'facter'
require 'activesupport'

# load all the standard facts
Facter.loadfacts
facts = {}

# collect fact name => value pairs
for fact in Facter.list.sort
  facts[fact] = Facter.value(fact)
end

# activesupport provides Hash#to_xml
puts facts.to_xml(:root => 'facts')
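
Running it prints one element per fact, courtesy of activesupport's Hash#to_xml (output trimmed, values will obviously vary):

$ ruby facts.rb
<?xml version="1.0" encoding="UTF-8"?>
<facts>
  <architecture>x86_64</architecture>
  ...
</facts>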

Using iptables to proxy a port on a remote machine on a different network

Scenario

machine A (192.168.100.1) provides resource A on port 8888
machine B (192.168.200.1) needs to access resource A

Without modifying machine B (not allowed), create machine C and have any traffic sent to machine C on port 8888 forwarded to machine A. Then tell machine B that machine C is machine A and nobody is the wiser. None of the examples I found online had this working properly.
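
The usual shape of the fix is a DNAT on the way in plus a masquerade on the way out, so machine A's replies return through machine C instead of confusing machine B. A sketch, run on machine C (C's own address doesn't matter, only that B can reach it and it can reach A):

# let machine C forward packets at all
echo 1 > /proc/sys/net/ipv4/ip_forward

# rewrite the destination of anything arriving for port 8888 to machine A
iptables -t nat -A PREROUTING -p tcp --dport 8888 \
  -j DNAT --to-destination 192.168.100.1:8888

# rewrite the source so machine A answers machine C, not machine B directly
iptables -t nat -A POSTROUTING -p tcp -d 192.168.100.1 --dport 8888 \
  -j MASQUERADE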

rsync between hosts using commands embedded into authorized_keys (ssh-keys)

I routinely transfer data between systems using rsync. Since I wanted the communication to be secure I used ssh keys. My trick of embedding a command in the key isn't terribly well documented, so here is how I do it...

Goal: Keep /opt/before on machine B in sync with /opt/after on machine A.

On machine A, create an ssh key for this
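
The key name is an assumption here; any dedicated, passphrase-less key will do:

# a key used only for this rsync job
ssh-keygen -t rsa -f ~/.ssh/rsync_key -N ''

Then on machine B, pin the matching public key in ~/.ssh/authorized_keys to the rsync server command, so the key can't be used for anything else. The exact --server option string depends on your rsync version and flags, so capture the real one (temporarily set the forced command to echo $SSH_ORIGINAL_COMMAND) before pinning it; the string below is roughly what rsync -az produces:

command="rsync --server -logDtprze.iLsfx . /opt/before/",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... rsync-key

With that in place machine A can push without a password:

rsync -az -e 'ssh -i ~/.ssh/rsync_key' /opt/after/ machineB:/opt/before/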
 

sudo -iu not working as expected

I was trying to allow a user to sudo to another account and run a specific command. I'm not a fan of getting them to run through su since it doesn't make much sense to involve a third tool in the equation. I could get it working with the following:

theiruser    ALL=(runasuser) NOPASSWD:/usr/local/bin/script.sh

The user could run script.sh with sudo -u runasuser /usr/local/bin/script.sh and it worked as expected, but if they tried sudo -iu runasuser /usr/local/bin/script.sh they were prompted for a password because the command didn't match. The catch is that with -i sudo runs the target user's login shell and hands the command to it with -c, so what sudo tries to match against sudoers is the shell invocation, not script.sh itself.
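
If the -i behaviour is really wanted, the thing to whitelist is what sudo actually sees: the shell plus -c. A hedged sketch, assuming runasuser's shell is /bin/bash (exact escaping varies between sudo versions, so test it):

theiruser    ALL=(runasuser) NOPASSWD:/bin/bash -c /usr/local/bin/script.sh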

semodule - global requirements not met

Trying to fix an issue with snmp, I started by building an snmp module using audit2allow. It kept failing to load, and the error message is a little cryptic...

[root@host thomas]# semodule -i snmp.pp
libsepol.print_missing_requirements: snmp's global requirements were not met: type/attribute snmpd_t (No such file or directory).
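
Decoded: the module's require block names snmpd_t, and the loaded policy doesn't define that type anywhere, so there's nothing to link against. seinfo (from the setools packages) will confirm:

# does the running policy define the type at all?
seinfo -t | grep snmpd

If it's missing, install the policy package that defines it, or trim the require block in the generated .te and rebuild with checkmodule and semodule_package, before semodule -i can succeed.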

TIL changing security limits on a running process (increasing nofile max open files without restarting process)

Had an sssd process spinning and using 100% cpu. Did an strace on it and saw that it was complaining about too many open files.

pid accept(24, 0xaddress, [110]) = -1 EMFILE (Too many open files)

Getting the number of open files for the process:
# lsof -p $(pidof sssd_pam) |wc -l
1065
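
One way to raise the limit without restarting is prlimit, from util-linux 2.21 and later; the 8192 below is an arbitrary new ceiling:

# raise soft and hard nofile limits for the running process
prlimit --pid $(pidof sssd_pam) --nofile=8192:8192

# confirm it took
grep 'open files' /proc/$(pidof sssd_pam)/limits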