I wanted to be able to define NFS shares with Puppet and have Puppet take care of the exports line and exportfs. I found this page, but I didn't like that there was Perl code thrown into the midst, so I rewrote it to use an inline template instead.

The code is up on GitHub at github.com/uphillian/puppet-nfsshare.

The main difference is using an inline template to create the options to set.

$options_set = inline_template("<% nfsopts.each do |opt| -%>set dir[.=\"<%= @nfsshare %>\"]/client[.=\"<%= @nfsaccess %>\"]/option[.=\"<%= opt %>\"] = <%= opt %>\n<% end %>")
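For clarity, here is what that template expands to, sketched in Python (the function and variable names mirror the Puppet ones; this is purely an illustration, not part of the module):

```python
# Illustrative Python equivalent of the inline template: emit one
# Augeas-style "set" command per NFS option for the given share/client.
def options_set(nfsshare, nfsaccess, nfsopts):
    return "".join(
        f'set dir[.="{nfsshare}"]/client[.="{nfsaccess}"]/option[.="{opt}"] = {opt}\n'
        for opt in nfsopts
    )

print(options_set("/srv/share", "*.example.com", ["rw", "sync"]))
```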

I wanted to configure rate limiting on our Exim server and needed to set up an ACL that I could include in multiple spots in the configuration. The keyword acl = acl_name is supported (referred to as nested ACLs) but the logic took a little bit of thought...

I want to rate limit users; the ACL for that is here:

warn authenticated = *
ratelimit = 50 / 1h / strict / $authenticated_id
message = Your account has sent over 50 messages per hour, the hourly limit is 100 - please contact support help@example.com to change this limit
log_message = WARN USER RATE EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max 500)

deny authenticated = *
ratelimit = 100 / 1h / strict / $authenticated_id
message = Your account has sent over 100 messages per hour - please contact support help@example.com to change this limit
log_message = RATE USER EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max $sender_rate_limit)

Now I wanted to turn this into a named acl and include it elsewhere. The logic of this tripped me up.

deny acl = acl_ratelimit_user

warn authenticated = *
ratelimit = 50 / 1h / strict / $authenticated_id
message = Your account has sent over 50 messages per hour, the hourly limit is 100 - please contact support help@example.com to change this limit
log_message = WARN USER RATE EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max 500)

deny authenticated = *
ratelimit = 100 / 1h / strict / $authenticated_id
message = Your account has sent over 100 messages per hour - please contact support help@example.com to change this limit
log_message = RATE USER EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max $sender_rate_limit)

The problem here is that I have

deny acl = acl_ratelimit_user

But the logic of this reads: deny if the condition is true, and the condition is true when the nested ACL *accepts*. So I need to change the nested ACL so that anything it accepts, I will deny. Anything that is denied in the nested ACL results in acl = acl_ratelimit_user being false, which means the deny won't apply and therefore the mail will be allowed. Once I understood that, it was easy enough to flip the logic; the final solution is:

deny acl = acl_ratelimit_user

# since we are including this acl elsewhere as a condition, we need it to return accept (true) when we want the top acl to deny.
warn authenticated = *
ratelimit = 50 / 1h / strict / $authenticated_id
message = Your account has sent over 50 messages per hour, the hourly limit is 100 - please contact support help@example.com to change this limit
log_message = WARN USER RATE EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max 500)

accept authenticated = *
ratelimit = 100 / 1h / strict / $authenticated_id
message = Your account has sent over 100 messages per hour - please contact support help@example.com to change this limit
log_message = RATE USER EXCEEDED: $authenticated_id -> $sender_rate/$sender_rate_period (max $sender_rate_limit)

I also made a rule to limit hosts; the important thing to remember there is that your smart hosts and incoming hosts should be excluded. Again, the logic must be reversed: you need to deny a list of hosts in order for the nested ACL to fail, which results in a pass higher up.

warn ratelimit = 75 / 1h / strict
message = This machine has sent over 75 messages per hour, the hourly limit is 150 - please contact support help@example.com to change this limit
log_message = WARN HOST RATE EXCEEDED: $sender_host_address -> $sender_rate/$sender_rate_period (max 500)

deny hosts = : :
accept ratelimit = 150 / 1h / strict
message = This machine has sent over 150 messages per hour - please contact support help@example.com to change this limit
log_message = RATE HOST EXCEEDED: $sender_host_address -> $sender_rate/$sender_rate_period (max $sender_rate_limit)
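To keep the inversion straight, here is the truth table as I think of it, sketched in Python (the message counts and limit are made up for illustration):

```python
# Model of Exim's nested-ACL logic: an "acl =" condition is true
# when the named ACL returns accept, false when it returns deny.
def acl_ratelimit_user(messages_this_hour, limit=100):
    """Inner ACL: *accept* when the sender is over the limit."""
    return messages_this_hour > limit

def outer_acl(messages_this_hour):
    """Outer ACL: deny fires only when the nested ACL accepts."""
    if acl_ratelimit_user(messages_this_hour):
        return "deny"    # inner accept => condition true => mail denied
    return "accept"      # inner deny => condition false => mail allowed

print(outer_acl(50))   # under the limit -> accept
print(outer_acl(150))  # over the limit  -> deny
```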

I ran into a snag when trying to update the BIOS on my PE R210ii. The BIOS update is 8MB, too large to fit on a floppy image, so I needed to make a floppy image large enough for the file. I tried increasing the size of the Dell floppy image and that didn't work, so I had to start from scratch with a boot floppy made from Windows. Funnily enough, the first link that came back in my Google search was that of my coworker.

Here are the steps I took to get the large boot floppy working with PXE.

  • Create a boot floppy from Windows Server 2008 in a VM. I suspect a boot image from bootdisk.com would work just as well.
  • expand the floppy image using newmkfloppyimg.sh. I chose to make a 20MB floppy.

    sudo ./newmkfloppyimg.sh 20 bigfloppy.img floppy.img

    This script requires mkdosfs, dd, and mktemp; you need root to be able to mount the old and new floppy images and copy the contents. The key here is that it uses mkdosfs to make a new, larger filesystem image (mkdosfs -I -v -C newfile size). It then mounts the filesystem and copies the files from the old image to the new one. Finally, it copies the boot sector from the original to the new image using dd (that's the part I figured out manually before finding the script :-( )

    dd if=$OLDIMAGE of=$NEWIMAGE bs=1 count=10 conv=notrunc 2>/dev/null
    dd if=$OLDIMAGE of=$NEWIMAGE bs=1 skip=61 seek=61 conv=notrunc count=451 2>/dev/null

  • next I mounted the new image and copied the 8MB dell bios upgrade file onto the filesystem.

    [thomas@install: ~] $ mkdir /tmp/bigfloppy
    [thomas@install: ~] $ sudo mount -o loop bigfloppy.img /tmp/bigfloppy
    [thomas@install: ~] $ ls -lh PER210II_020005.exe
    -rw-rw-r-- 1 thomas thomas 8.2M Sep 13 08:47 PER210II_020005.exe
    [thomas@install: ~] $ sudo cp PER210II_020005.exe /tmp/bigfloppy/
    [thomas@install: ~] $ sudo umount /tmp/bigfloppy

  • Install syslinux and copy memdisk into the tftpboot directory.
  • Create a menu entry in our pxelinux config file.

    MENU PASSWD superdupersecretpasswordnoguessing
LABEL per210ii
    MENU label PE-R210ii
    kernel memdisk
    initrd bios/r210ii.img
    append floppy

    You can append the number of cylinders, heads, and sectors to the "append floppy" line, but my memdisk was able to guess them properly so I didn't.

  • PXE boot the machine, select the BIOS menu entry, enter the superdupersecret password, and you are off to the races.

My PowerEdge R210ii wouldn't boot with a QLogic 8Gb Fibre Channel card installed (just a cursor in the corner). After applying this BIOS update the card is working great.
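Incidentally, the boot-sector copy that newmkfloppyimg.sh does with those two dd invocations amounts to a little byte surgery: copy bytes 0-9 and 61-511 from the old image, leaving the new image's BIOS parameter block (bytes 10-60, which hold the new geometry) intact. A Python sketch of the same operation, assuming the images are plain files:

```python
def copy_boot_sector(old_image, new_image):
    """Copy boot-sector bytes 0-9 and 61-511 from old_image onto
    new_image, preserving the new image's BIOS parameter block
    (bytes 10-60), like the two dd calls in newmkfloppyimg.sh."""
    with open(old_image, "rb") as f:
        sector = f.read(512)
    with open(new_image, "r+b") as f:
        f.seek(0)
        f.write(sector[0:10])    # dd bs=1 count=10
        f.seek(61)
        f.write(sector[61:512])  # dd bs=1 skip=61 seek=61 count=451
```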

I've run into this enough times that I thought I'd write a little script to do the work for me.
It's just a simple one-line call to Python, but I wrapped it with some argument parsing.

It's a Python script that takes a password and returns the salted SHA-512 hash by default; sha256 and md5 can be specified with switches.

The script is hosted on GitHub here.
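The script on GitHub is the authority; purely as an illustration of the salt-then-hash idea, here is a sketch (the helper names are mine, and the real script presumably emits crypt(3)-style strings suitable for /etc/shadow rather than this ad-hoc format):

```python
import hashlib
import hmac
import os

def salted_hash(password, algorithm="sha512"):
    """Hash password with a random salt; returns 'algorithm$salt$digest'."""
    salt = os.urandom(8).hex()
    digest = hashlib.new(algorithm, (salt + password).encode()).hexdigest()
    return f"{algorithm}${salt}${digest}"

def verify(password, stored):
    """Recompute the digest with the stored salt and compare."""
    algorithm, salt, digest = stored.split("$")
    candidate = hashlib.new(algorithm, (salt + password).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)
```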

We maintain multiple repositories, many of which have the same RPMs repeated in different locations. To save space we use hardlinking extensively. The hardlink command does an OK job of finding things to link, but we are only really interested in RPMs and don't want repomd.xml or comps getting linked, so I wrote a little Python script to hardlink all the RPMs based on a checksum. The script is over at my GitHub: hardlink_rpms.
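A minimal sketch of the idea (my illustration, not the actual hardlink_rpms script): walk the tree, checksum only the .rpm files, and replace any duplicate with a hardlink to the first copy seen.

```python
import hashlib
import os
import sys

def sha256sum(path, bufsize=1 << 20):
    """Checksum a file in chunks so large RPMs don't eat memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def hardlink_rpms(top):
    seen = {}  # checksum -> first path found with that content
    for dirpath, _dirs, files in os.walk(top):
        for name in files:
            if not name.endswith(".rpm"):
                continue  # only RPMs; leave repomd.xml, comps, etc. alone
            path = os.path.join(dirpath, name)
            original = seen.setdefault(sha256sum(path), path)
            if original is path:
                continue  # first copy of this content
            if os.path.samestat(os.stat(original), os.stat(path)):
                continue  # already hardlinked
            os.unlink(path)           # replace duplicate with a hardlink
            os.link(original, path)

if __name__ == "__main__":
    hardlink_rpms(sys.argv[1] if len(sys.argv) > 1 else ".")
```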

When using an mdbox or Maildir mailbox, you need to use dovecot-lda to deliver the message rather than letting procmail write it directly. I had a few issues getting this to work properly; here are the details of getting it going.

The first issue is that my procmail transport in Exim needed to set user to the local_part and set initgroups to pick up the uid and gid of the local user, then run procmail as that user/group. Here is that transport:

driver = pipe
command = /usr/bin/procmail -d $local_part
check_string = "From "
escape_string = ">From "
user = $local_part
initgroups

The check_string and escape_string parts are from the manual; they just work around a bug in procmail where it would make a mistake if it saw a body line starting with "From ".

Next I had to get deliver to do the right thing. In my case that meant taking the group and user parts off the lda in dovecot.conf, so I made a separate dovecot-lda.conf with those settings commented out; the lda part of my config is included below.

#mail_privileged_group = mail
#mail_gid = 12

protocol lda {
mail_plugins = $mail_plugins sieve
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
postmaster_address = postmaster@example.com
log_path = /var/log/dovecot/deliver-errors.log
info_log_path = /var/log/dovecot/deliver.log
}

The postmaster_address is needed so that if there are errors with submission via lda, the errors will be delivered to this address; similarly, we add log lines. We use Sieve here, so we append sieve to the plugins line. The next two lines are important: lda_mailbox_autocreate and lda_mailbox_autosubscribe. If the call to deliver names a mailbox that doesn't exist yet, deliver will create it; without this line the delivery fails if the box doesn't exist. The autosubscribe is there so that the box shows up in the user's list after creation.

With the new config file in place, I wrote a small script that calls dovecot-lda with the appropriate config file:

#!/bin/sh
exec /usr/libexec/dovecot/dovecot-lda -c /etc/dovecot/dovecot-lda.conf "$@"

I wasn't kidding about small... that's all it needs to do. I store this in /usr/bin/deliver and make sure its SELinux type is procmail_exec_t.

[thomas@example: ~] $ ls -Z /usr/bin/deliver
-rwxr-xr-x. root root system_u:object_r:procmail_exec_t:s0 /usr/bin/deliver

With these settings in place, you can make a procmailrc that calls /usr/bin/deliver:

:0 c
! tuphill@gmail.com

:0 w
* ^From example@
| $DELIVER -d $LOGNAME -m Example

:0 w
| $DELIVER -d $LOGNAME

In the above, any email from example@someplace will be delivered via Dovecot to the Example mailbox; the rest of the mail will be delivered to the Inbox.

I also needed to make some SELinux adjustments to allow procmail to run the dovecot-lda command as well as use the sendmail command (which is just Exim). Here is my SELinux policy for that part.


require {
type dovecot_deliver_exec_t;
type dovecot_t;
type dovecot_var_run_t;
type exim_t;
type exim_log_t;
type exim_spool_t;
type procmail_t;
type procmail_exec_t;
type sendmail_t;
type var_lib_t;
}

allow procmail_t dovecot_deliver_exec_t:file { open read execute execute_no_trans getattr };
allow procmail_t dovecot_t:unix_stream_socket connectto;
allow procmail_t procmail_exec_t:lnk_file read;
allow procmail_t dovecot_var_run_t:dir search;
allow procmail_t dovecot_var_run_t:sock_file write;

allow procmail_t exim_spool_t:file { read write };

allow procmail_t sendmail_t:process { siginh rlimitinh noatsecure };
allow procmail_t var_lib_t:file read;

allow sendmail_t exim_log_t:dir search;
allow sendmail_t exim_log_t:file open;

allow sendmail_t exim_spool_t:dir { write search read remove_name open getattr add_name };

allow sendmail_t exim_spool_t:file { rename setattr read lock create write getattr open append };
allow sendmail_t exim_t:process { siginh rlimitinh noatsecure };

'cause enforcing is the only way to run...

I have a view of profile data that I want to sort by last name. The name field currently has "firstname [middlename] lastname" in it. I looked around and found this example of using the Views PHP filter to sort based on a query.

I then made a query that orders nids by last name and plugged it into my view as a filter.

SELECT nid,title,SUBSTRING_INDEX(title,' ',-1) AS lastname FROM node WHERE type='profile' ORDER BY lastname;

Running that query returns the nids in lastname order.
So my whole filter is the following:

$sql = "select nid,title,substring_index(title,' ',-1) as lastname from node where type='profile' order by lastname";
$sql = db_rewrite_sql($sql);
$result = db_query($sql);
while ($row = db_fetch_array($result)) {
  $node_ids[] = $row['nid'];
}
return $node_ids;

Our aliases get spammed like any other account, but filtering on them would require making a real account, and I wanted to be able to filter aliases without creating accounts for everything. My first solution was to create an account and filter on that one; then, using $original_local_part, I could forward to a filtered alias. This works, but if someone discovers the filtered alias, they can bypass the filtering.

The Exim docs suggested that I could have filtering in the aliases, but it didn't seem to work and I kept getting this in the logs:

error in redirect data: missing or malformed local part (expected word or "

Looking closer at the docs I found this sentence:

If you are reading the data from a database where newlines cannot be included, you can use the ${sg} expansion item to turn the escape string of your choice into a newline.

Exim wasn't interpreting the \n as a newline, so it wasn't treating the alias as a filter file. I added the sg to the data part of my alias router and voilà, everything works.

data = ${sg{${lookup{$local_part}dbm{/etc/exim/aliases.db}}}{newline}{\n}}

This sg will take any occurrence of the word newline in an alias and translate it into a newline character. After that, when I deliver with -d I see the line being translated...

lookup yielded: # Exim filter newline if $sender_address contains proofpoint-pps newline then newline seen finish newline else deliver thomas@example.com newline endif
expanded: # Exim filter
if $sender_address contains proofpoint-pps
then
seen finish
else deliver thomas@example.com
endif

So aliases can be filtered like real accounts this way. You could use the "# Sieve filter" syntax too, but the "# Exim filter" syntax works well enough for me.
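The rewrite that ${sg} performs is just a regex substitution over the lookup result; in Python terms:

```python
import re

# The alias exactly as the dbm lookup yields it: one long line with
# the word "newline" standing in for real line breaks.
alias = ('# Exim filter newline if $sender_address contains proofpoint-pps '
         'newline then newline seen finish newline else deliver '
         'thomas@example.com newline endif')

# ${sg{...}{newline}{\n}} replaces every match of the regex "newline"
# with an actual newline, turning the alias into a filter file.
print(re.sub(r'newline', '\n', alias))
```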

The leap second has been a problem for Java apps (389-console for me) and apparently some Ruby apps (puppet, it seems, though I can't prove it). I found the common fix is to just set the date based on the current date, as shown here.

To do this on all the machines, a single line with func:

func \* call command run 'date; date $(date +%m%d%H%M%C%y.%S); date'

The only thing to remember here is to prefer $(command) over `command` and to single-quote the whole thing, so the local shell doesn't expand the inner date before func runs it on each machine.

After running that, all the machines are happy again.

We are using gnarwl for vacation notifications, and I would like gnarwl to reply only if the current time is inside the vacationStart/vacationEnd window.

Here is the queryfilter to do that, using the following information:

$recepient           - receiver of the message
$time                - current time in seconds since the epoch
vacationActive       - attribute indicating that vacation replies are enabled
vacationStart        - attribute for start time of the vacation in seconds since the epoch
vacationEnd          - attribute for end time of the vacation in seconds since the epoch
mail,
mailAlternateAddress - attributes for the email addresses of an account

queryfilter (&(|(mail=$recepient@*)(mailAlternateAddress=$recepient@*))(vacationActive=TRUE)(|(!(vacationStart=*))(&(vacationStart<=$time)(vacationEnd>=$time))))

Broken down, the filter checks that all of the following conditions are met (that is the outer and, (&)):

  • Match either mail or mailAlternateAddress on the recepient address
  • Make sure vacationActive is TRUE
  • Check that either vacationStart is not present or
    • vacationStart is less than the current time and
    • vacationEnd is greater than the current time.
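The same logic, modeled in Python with a dict standing in for the LDAP entry (illustrative only; gnarwl evaluates this server-side through the LDAP filter):

```python
import time

def vacation_matches(entry, recipient, now=None):
    """Model of the gnarwl queryfilter logic against a dict 'entry'."""
    now = int(now if now is not None else time.time())
    # (|(mail=$recepient@*)(mailAlternateAddress=$recepient@*))
    addresses = entry.get("mail", []) + entry.get("mailAlternateAddress", [])
    if not any(a.startswith(recipient + "@") for a in addresses):
        return False
    # (vacationActive=TRUE)
    if entry.get("vacationActive") != "TRUE":
        return False
    # (|(!(vacationStart=*))(&(vacationStart<=$time)(vacationEnd>=$time)))
    if "vacationStart" not in entry:
        return True
    return int(entry["vacationStart"]) <= now <= int(entry["vacationEnd"])

entry = {
    "mail": ["thomas@example.com"],
    "vacationActive": "TRUE",
    "vacationStart": "1340000000",
    "vacationEnd": "1341000000",
}
print(vacation_matches(entry, "thomas", now=1340500000))  # True: inside the window
```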
