I was trying to update some information on my filer using options and running into a wall, so I thought I'd try FilerView. On Mac, Linux, and Windows it wouldn't work; I kept getting the same error: Exception: java.lang.ClassNotFoundException: com.netapp.admin.FilerAutosupportConfigUIApplet.class. I found an article on the NOW site that spoke of the issue. The problem is TLS 1.0 being turned on in Java while SSL is enabled on the filer. I didn't even know there was a Java control panel in the Linux install, but you learn something every day...

On my system, the control panel was /usr/java/jre1.6.0_21/bin/ControlPanel; your mileage may vary. Under the advanced settings, go to Security and disable TLS 1.0. After that it works great. Nice simple fix.

I had a few requests to get this working on our workstations. I managed to hack together a working version. I'm still working on making a proper package; at the moment it's just a binary rpm. I did manage to get an rpm that builds itself properly using a trick borrowed from the vlc package. The problem is that importing gtk in a few of the python scripts makes rpmbuild go looking for a DISPLAY; the vlc trick gets around that obstacle. I also manually resize the PNGs, since our version of nautilus doesn't seem to handle that itself...

I've also replaced the /usr/bin/dropbox binary with a wrapper that uses wget to download the required tar file from dropbox.com; fixing the rest of the python script to work seemed like a waste of time...

There are two problems with running dropbox on something as ancient as RHEL5:

  1. python 2.6 (RHEL5 ships python 2.4)
  2. glib 2.16 (RHEL5 ships glib 2.12)

Dropbox compiled against more modern versions of glib and gtk2 and wrote their scripts using python 2.6 constructs. So to get dropbox working, you have to backport all the modern stuff. For python this means taking out all the with statements and contextlib closing calls.
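For example, a with block has to be unrolled into the try/finally it is shorthand for. Here's a small sketch of the kind of rewrite involved (read_config and the path are made up for illustration, they're not from the dropbox source):

#!/usr/bin/python
# Sketch of backporting a python 2.6 'with' block so it runs on the
# python 2.4 that ships with RHEL5. read_config and the config path are
# made-up examples, not code from the dropbox scripts.

# python 2.6 style (what the dropbox scripts use):
#
#   from contextlib import closing
#   with closing(open("/etc/dropbox.conf")) as f:
#       data = f.read()

# python 2.4 style (the backported equivalent):
def read_config():
    f = open("/etc/dropbox.conf")
    try:
        data = f.read()
    finally:
        f.close()
    return data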

glib is a little more complex: the new g_async_queue_new_full function has to be replaced with g_async_queue_new, but we still need to handle freeing the queued items when the queue is destroyed... I haven't fixed that yet.

g_strcmp0 isn't available in 2.12, so I just wrote an inline replacement for it.

g_timeout_add_seconds isn't available either, but that one's easy: just use g_timeout_add and change the interval from 1 to 1000 (it takes milliseconds).

Here is my patch nautilus-dropbox-0.6.3-puias.patch

Once you have applied this, you'll need the following rpms as specified in my spec file:

BuildRequires: gtk2-devel, glib2-devel, nautilus-devel, libnotify-devel, nautilus-extensions, gnome-vfs2-devel, pygtk2-devel, pygtk2, python-docutils, vnc-server
Requires: nautilus, glib2, gtk2, libnotify, gnome-vfs2, pygtk2, python-docutils

You can then start building. Let me know if it doesn't compile cleanly or patch properly for you; I'm still working on it...

After all that, the dropbox python script that gets installed in /usr/bin is still broken: it can't download the binary package from dropbox automatically, so you'll have to download that manually and unpack it into your home directory. The URL is in the script, http://www.dropbox.com/download?plat=%s, where plat is the result of the plat() function.

#!/usr/bin/python
import sys
import platform

def plat():
    if sys.platform.lower().startswith('linux'):
        arch = platform.machine()
        if (arch[0] == 'i' and arch[1].isdigit() and arch[2:4] == '86'):
            plat = "x86"
        elif arch == 'x86_64':
            plat = arch
        else:
            FatalVisibleError("Platform not supported")
        return "lnx.%s" % plat
    else:
        FatalVisibleError("Platform not supported")

print plat()
For 32-bit it's going to be lnx.x86; for 64-bit, lnx.x86_64.

This is just a tar file, so download it and place it in the user's home directory.
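If you'd rather script that step, here's a rough sketch that fetches the tar for a given platform string and unpacks it into the home directory. fetch_dropbox_dist is my own name, not part of the dropbox package, and it's written without with so it runs on python 2.4:

#!/usr/bin/python
# Sketch: download the dropbox tarball for a platform string returned
# by plat() above and unpack it into the home directory.
# fetch_dropbox_dist is a made-up helper name, not from the dropbox package.
import os
import tarfile
import urllib2

def fetch_dropbox_dist(plat):
    # plat is "lnx.x86" or "lnx.x86_64", as returned by plat()
    url = "http://www.dropbox.com/download?plat=%s" % plat
    local = os.path.expanduser("~/dropbox-%s.tar.gz" % plat)
    out = open(local, "wb")
    try:
        out.write(urllib2.urlopen(url).read())
    finally:
        out.close()
    tar = tarfile.open(local)
    try:
        # unpack into the home directory, which is where dropbox expects it
        for member in tar.getmembers():
            tar.extract(member, os.path.expanduser("~"))
    finally:
        tar.close()

fetch_dropbox_dist("lnx.x86_64")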

That should be it: if you do make; make install in the dropbox directory and then run "dropbox start -i", it should start up and ask you if you already have an account.

One minor note: the PNGs provided in the dropbox tar file are 64x64, which means once you install everything and have it running, the emblems in nautilus are huge! They overwrite the icons of the files, so I used convert (from ImageMagick) to resize them to 16x16, which seems a reasonable size.
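If you have a pile of them to do, the resize is easy to script. A sketch; icon_dir is a placeholder for wherever the dropbox PNGs ended up on your system, and it assumes ImageMagick is installed:

#!/usr/bin/python
# Resize every PNG in icon_dir down to 16x16 using ImageMagick's convert.
# icon_dir is a placeholder path, adjust it for your install.
import glob
import os
import subprocess

icon_dir = "/path/to/dropbox/emblems"
for png in glob.glob(os.path.join(icon_dir, "*.png")):
    subprocess.call(["convert", png, "-resize", "16x16", png])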

This is still very much a work in progress...any comments, questions or suggestions appreciated.

The RPMS

Updated 11Oct2010
SRPM
i386
x86_64

I still have a little work to do on these, but they should install cleanly; as usual, let me know if they don't...

How to apply the patch:

[uphill@build nautilus-dropbox]$ \rm -r nautilus-dropbox-0.6.3
[uphill@build nautilus-dropbox]$ tar xf nautilus-dropbox-0.6.3.tar.bz2
[uphill@build nautilus-dropbox]$ patch -p0 < nautilus-dropbox-0.6.3-puias.patch
To list the package groups available in yum, you would use yum grouplist. When you are adding groups to your kickstart file, you use the id of the group, not the name, so this *very* simple script lists the groups with their ids.
#!/usr/bin/python
 
import yum
 
yb = yum.YumBase()
yb.doConfigSetup()
yb.doTsSetup()
for grp in yb.comps.groups:
	print "%s (%s)" % (grp.name,grp.groupid)
This is posted in my howto also
In the process of munging our RHEL5 kickstarts into RHEL6 ones, we started getting the error "Partition requires a size specification". The partition had a size of 0 set in the kickstart, with --grow:
part pv.2 --size=0 --grow
reading kickstart.py, it seems that the check for size has gone from pd.size = None to not self.size
if not self.size and not self.onPart:
    raise KickstartValueError, formatErrorMsg(self.lineno, msg="Partition requires a size specification")
The fix was simple enough then, just change size from 0 to 1.
part pv.2 --size=1 --grow
My problem was that I wanted to filter access on a raid array instead of doing it on the switch. I could log in to the switch and figure out the linux HBA's WWN (world-wide name), but I thought there must be a way to get it from the linux machine directly.

Here are the steps:

  1. determine the hba's scsi bus number, set VENDOR to the vendor of your fibre channel array
    [root@host ~] grep -B1 "Vendor: $VENDOR" /proc/scsi/scsi |grep Host |head
    Host: scsi3 Channel: 00 Id: 00 Lun: 00
    Host: scsi3 Channel: 00 Id: 00 Lun: 01
    Host: scsi3 Channel: 00 Id: 00 Lun: 02
    Host: scsi3 Channel: 00 Id: 00 Lun: 03
    In this case, my magical number is 3.
  2. find the fc_host directory for the hba
    RHEL5
    [root@host ~] cd /sys/class/scsi_host/host3/device/fc_host:host3
    RHEL6
    [root@host ~] cd /sys/class/scsi_host/host3/device/fc_host/host3/
  3. look in port_name
    [root@host host3]# cat port_name
    0x2001002219....4f
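If you have more than one HBA, a quick script can walk sysfs and print all the WWNs at once (a sketch, assuming the fc_host class is populated, which it should be whenever the fibre channel transport is loaded):

#!/usr/bin/python
# Print the WWN (port_name) of every fibre channel HBA found in sysfs.
import glob
import os

for path in glob.glob("/sys/class/fc_host/host*/port_name"):
    host = os.path.basename(os.path.dirname(path))
    f = open(path)
    try:
        print "%s: %s" % (host, f.read().strip())
    finally:
        f.close()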
Hope that helps...
I had a problem with a filesystem that was full on a vm. The vm's hard drive is just a lun presented over fibre channel, so I resized the lun on the raid controller. The devices (/dev/sdX) for the drive noticed the new size after I did the usual scan and partprobe, but the multipath device didn't see the new size and was still working off the old one. After some digging I found this page on how to get multipathd to notice the new size. I'll summarise here.

Assuming our hard drive in question is called vm1, /dev/mapper/vm1

[root@server0 ~]# multipath -ll vm1
vm1 (1ACNCorp_FF01000033100008) dm-15 DUMMY,R_dummy_root
[size=56G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 3:0:0:0  sdc  8:32   [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 3:0:1:0  sdu  65:64  [active][ready]
This shows that the drives in question are sdc and sdu. When we start we have 70GB LUNs for the vm; we'll resize them to 80GB. After the resize we don't see a change on our hypervisor, so we have to use blockdev to make the kernel reread the partition table and pick up the new size.
[root@server0 ~]# fdisk -l /dev/sdc
 
Disk /dev/sdc: 70.0 GB, 70002409472 bytes
255 heads, 63 sectors/track, 8510 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          19      152586   83  Linux
/dev/sdc2              20        8510    68203957+  8e  Linux LVM
[root@server0 ~]# blockdev --rereadpt /dev/sdc
[root@server0 ~]# fdisk -l /dev/sdc
 
Disk /dev/sdc: 80.0 GB, 80003851264 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          19      152586   83  Linux
/dev/sdc2              20        8510    68203957+  8e  Linux LVM
[root@server0 ~]# 
Repeat this for the other member of the group, then notice that the multipath device doesn't have the new size yet.
[root@server0 ~]# fdisk -l /dev/sdu
 
Disk /dev/sdu: 80.0 GB, 80003851264 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdu1   *           1          19      152586   83  Linux
/dev/sdu2              20        8510    68203957+  8e  Linux LVM
[root@server0 ~]# fdisk -l /dev/mapper/vm
[root@server0 ~]# fdisk -l /dev/mapper/vm1
 
Disk /dev/mapper/vm1: 70.0 GB, 70002409472 bytes
255 heads, 63 sectors/track, 8510 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
           Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vm1p1   *           1          19      152586   83  Linux
/dev/mapper/vm1p2              20        7294    58436437+  8e  Linux LVM
Now remove the paths from the multipath device, add them back, then resize the map:
[root@server0 ~]# multipathd -k
multipathd> del path sdc
ok
multipathd> add path sdc
ok
multipathd> del path sdu
ok
multipathd> add path sdu
ok
multipathd> resize map vm1
ok
multipathd> 
[root@server0 ~]# fdisk -l /dev/mapper/vm1
 
Disk /dev/mapper/vm1: 80.0 GB, 80003851264 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
           Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vm1p1   *           1          19      152586   83  Linux
/dev/mapper/vm1p2              20        7294    58436437+  8e  Linux LVM

Next, resize the actual lvm. I got most of this from this page.

  • reboot the vm
  • use fdisk -u, delete the partition, make a new partition that starts on the same sector but extends to the end of the disk
  • partprobe to reread the partition table, or reboot
  • pvresize /dev/hda2
  • pvdisplay to see how much is now free
  • lvextend -l +[number free] /dev/vg/lv
  • lvdisplay (see new size)
  • resize2fs /dev/vg/lv
[root@vm1 ~]# fdisk -l /dev/hda
 
Disk /dev/hda: 70.0 GB, 70002409472 bytes
255 heads, 63 sectors/track, 8510 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          19      152586   83  Linux
/dev/hda2              20        8510    68203957+  8e  Linux LVM
[root@vm1 ~]# blockdev --rereadpt /dev/hda
BLKRRPART: Device or resource busy
[root@vm1 ~]# poweroff
 
Broadcast message from root (pts/0) (Thu May 27 16:43:09 2010):
 
The system is going down for system halt NOW!
I believe a reboot is enough to make the vm reread the hard drive file... but I went for a poweroff anyway.
[root@vm1 ~]# fdisk -l /dev/hda
 
Disk /dev/hda: 80.0 GB, 80003851264 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          19      152586   83  Linux
/dev/hda2              20        8510    68203957+  8e  Linux LVM
[root@vm1 ~]# fdisk -u /dev/hda
 
The number of cylinders for this disk is set to 9726.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): p
 
Disk /dev/hda: 80.0 GB, 80003851264 bytes
255 heads, 63 sectors/track, 9726 cylinders, total 156257522 sectors
Units = sectors of 1 * 512 = 512 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *          63      305234      152586   83  Linux
/dev/hda2          305235   136713149    68203957+  8e  Linux LVM
 
Command (m for help): d
Partition number (1-4): 2
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First sector (305235-156257521, default 305235): 
Using default value 305235
Last sector or +size or +sizeM or +sizeK (305235-156257521, default 156257521): 
Using default value 156257521
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
 
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@vm1 ~]# reboot
 
Broadcast message from root (pts/0) (Thu May 27 16:56:29 2010):
 
The system is going down for reboot NOW!
Now the straightforward LVM stuff:
[root@vm1 ~]# pvdisplay /dev/hda2
  --- Physical volume ---
  PV Name               /dev/hda2
  VG Name               Example
  PV Size               65.04 GB / not usable 13.24 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              2081
  Free PE               0
  Allocated PE          2081
  PV UUID               jUn8V8-Ca29-HlTK-8Rnn-yq1k-2jae-aCfGnH
 
[root@vm1 ~]# pvresize /dev/hda2
  Physical volume "/dev/hda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@vm1 ~]# pvdisplay /dev/hda2
  --- Physical volume ---
  PV Name               /dev/hda2
  VG Name               Example
  PV Size               74.36 GB / not usable 20.39 MB
  Allocatable           yes 
  PE Size (KByte)       32768
  Total PE              2379
  Free PE               298
  Allocated PE          2081
  PV UUID               jUn8V8-Ca29-HlTK-8Rnn-yq1k-2jae-aCfGnH
Free PE is 298, so I can add that to my logical volume.
[root@vm1 ~]# lvdisplay /dev/Example/RootVol 
  --- Logical volume ---
  LV Name                /dev/Example/RootVol
  VG Name                Example
  LV UUID                eawcxY-KVZG-rcdO-Bf5E-Vck3-JOnJ-8KN2lm
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                57.03 GB
  Current LE             1825
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
[root@vm1 ~]# lvextend -l +298 /dev/Example/RootVol 
  Extending logical volume RootVol to 66.34 GB
  Logical volume RootVol successfully resized
[root@vm1 ~]# lvdisplay /dev/Example/RootVol 
  --- Logical volume ---
  LV Name                /dev/Example/RootVol
  VG Name                Example
  LV UUID                eawcxY-KVZG-rcdO-Bf5E-Vck3-JOnJ-8KN2lm
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                66.34 GB
  Current LE             2123
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
[root@vm1 ~]#
The last step is to resize the ext4 filesystem on the logical volume:
[root@vm1 ~]# resize2fs /dev/Example/RootVol 
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/Example/RootVol is mounted on /; on-line resizing required
Performing an on-line resize of /dev/Example/RootVol to 17391616 (4k) blocks.
The filesystem on /dev/Example/RootVol is now 17391616 blocks long.
[root@vm1 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/Example-RootVol
                       65G  1.6G   60G   3% /
Done.
While attempting to monitor traffic on a QLogic SANbox 5602, I noticed that the traffic counter (connUnitPortStatCountTxObjects) was being returned as a hexadecimal value with spaces in it (e.g. Hex-STRING: 00 00 00 00 01 4F E1 1B). Zabbix was complaining about an invalid type of returned data because of the spaces. I wrote the following simple patch to remove the spaces (I know it could be done more simply, but I just needed a quick fix).
diff -up zabbix-1.8.1/src/libs/zbxcommon/misc.c.spaces zabbix-1.8.1/src/libs/zbxcommon/misc.c
--- zabbix-1.8.1/src/libs/zbxcommon/misc.c.spaces	2010-03-19 10:57:07.000000000 -0400
+++ zabbix-1.8.1/src/libs/zbxcommon/misc.c	2010-03-22 10:31:36.000000000 -0400
@@ -1322,7 +1322,7 @@ int is_uhex(char *str)
 
 	for (; '\0' != *str; str++)
 	{
-		if ((*str < '0' || *str > '9') && (*str < 'a' || *str > 'f') && (*str < 'A' || *str > 'F'))
+		if ((*str < '0' || *str > '9') && (*str < 'a' || *str > 'f') && (*str < 'A' || *str > 'F') && (*str != ' '))
 			break;
 
 		res = SUCCEED;
diff -up zabbix-1.8.1/src/libs/zbxsysinfo/sysinfo.c.spaces zabbix-1.8.1/src/libs/zbxsysinfo/sysinfo.c
--- zabbix-1.8.1/src/libs/zbxsysinfo/sysinfo.c.spaces	2010-03-19 11:03:16.000000000 -0400
+++ zabbix-1.8.1/src/libs/zbxsysinfo/sysinfo.c	2010-03-22 10:23:53.000000000 -0400
@@ -587,6 +587,19 @@ int set_result_type(AGENT_RESULT *result
 		case ITEM_DATA_TYPE_HEXADECIMAL:
 			if (SUCCEED == is_uhex(c))
 			{
+				/* remove spaces */
+				int hex_i=0,hex_j;
+				while(c[hex_i]) {
+					if (c[hex_i] == ' ') {
+						hex_j = hex_i;
+						while(c[hex_j]) {
+							c[hex_j] = c[hex_j+1];
+							++hex_j;
+						}
+					}
+					++hex_i;
+				}
+
 				ZBX_HEX2UINT64(value_uint64, c);
 				SET_UI64_RESULT(result, value_uint64);
 				ret = SUCCEED;
After patching I was able to monitor the individual port stats (e.g. for the first port .1.3.6.1.3.94.4.5.1.4.16.0.0.192.221.19.34.253.0.0.0.0.0.0.0.0.1)
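For reference, the whole patch boils down to "strip the spaces before converting the hex string"; expressed in python, the transformation is just:

#!/usr/bin/python
# The same transformation the C patch performs, shown in python:
# drop the spaces from the SNMP Hex-STRING, then treat it as one hex number.
raw = "00 00 00 00 01 4F E1 1B"
print int(raw.replace(" ", ""), 16)   # prints 22012187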
We often need to keep two directories on different machines synchronised. We would like the rsync to be secure and to allow only the rsync, with no shell access. This method uses ssh keys with commands set in authorized_keys.

Scenario I

Back up the directory /mnt/one on the server pris to the directory /home/user/two on the client deckard, initiating the copy from the client deckard (i.e. send files from the server to the client).

  • create ssh keys using ssh-keygen
    [user@deckard ~]$ cd .ssh
    [user@deckard .ssh]$ ssh-keygen -t dsa -f deckard
    Generating public/private dsa key pair.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in deckard.
    Your public key has been saved in deckard.pub.
    The key fingerprint is:
    17:f4:69:30:6c:67:5a:73:2e:6f:ba:4b:8b:94:2a:f9 user@deckard.example.com
    The key's randomart image is:
    +--[ DSA 1024]----+
    |  .+             |
    | .o+=..          |
    |  ..=++          |
    |   .o. .         |
    |    S . o        |
    |   . . o         |
    |    . o .o       |
    |     o o o..     |
    |      oE . +o    |
    +-----------------+
  • create a new user for the sync operation and copy the public key you just created to pris
    [root@pris ~]# useradd deckardsync
    [root@pris ~]# su - deckardsync
    [deckardsync@pris ~]$ mkdir .ssh
    [deckardsync@pris ~]$ cd .ssh
    [deckardsync@pris .ssh]$ scp user@deckard:.ssh/deckard.pub authorized_keys
    deckard.pub                                   100% 1194     1.2KB/s   00:00
  • make the /mnt/one directory accessible to deckardsync (or ideally owned by this user)
    [root@pris ~]# chown deckardsync /mnt/one
  • edit authorized_keys and add the commands for rsync into the key
    command="rsync --server --sender -vlogDtprCz . /mnt/one/" ssh-dss AAAAB3NzaC1kc3MAAACBANbyPA4Vkem1tXrBcmkc9+SHeBrgHKbeBdS2MZKMBT/CsPWPSwMFQGg3GzX2KFrIVlZW/+OfkFcrZabMxtLb4CfvFgZsK18hcyYWZobhtpzqfsoolVnWbHdcmxFqyUq9fIK5iPA2UnvLoLRCDuklQNZ+V8o7fiCiPzXw5sqw3weRAAAAFQDKbAINhyt3OzJhP680PqrA9vHNFwAAAIB0mmnu9rfUKnSAH8UV068H28NEaNuIvSzQchvsPpBZmLpN/yr0mUbWdUJtVfFO72fbhQW+gQmEydCoPgehAGCx0g5jcs+0J7nhDlCqCqYAluD/79jJvEr7Tc33u0QTJSEX9My5X6OVtKByGfGPIyeLdhdsM2s70xbXKpfV4j8KpgAAAIEAnbxqsdbxpZ/vKZMJCW4TuHzOk76By5HHHHRb6XIUTsImQmoHrH1T2ioVil6eNp+V02hbYzbs8OuMqj6ne3gLzIyPqIP1OHuusisrLKgtWTC74lZnZ48d9QCyUHI48yZyoISs0HvEvC08LHYqsq1z1ntCHAde4iszE2TAMsYBat4= user@deckard.example.com
    Note: there are no line breaks in the above key... the file has only one line.
  • copy something into /mnt/one on pris
    [root@pris ~]# cd /mnt/one
    [root@pris one]# cp -a /usr/share/doc/rsync-2.6.8 .
  • start rsync on deckard using the ssh key
    [user@deckard .ssh]$ rsync -e 'ssh -i deckard -l deckardsync' -Cavz pris:/mnt/one /home/user/two/
    receiving file list ... done
    ./
    rsync-2.6.8/
    rsync-2.6.8/COPYING
    rsync-2.6.8/README
    rsync-2.6.8/tech_report.tex
    sent 98 bytes  received 14416 bytes  29028.00 bytes/sec
    total size is 36507  speedup is 2.52
    [user@deckard .ssh]$ ls ~/two
    rsync-2.6.8

Scenario II

Back up the directory /home/user/two on the client deckard to the server directory /mnt/three on pris.

The steps involved here are essentially the same, with only one small change in authorized_keys: drop the --sender option to rsync (since pris is no longer the sender).

  • create new ssh key for the transfer in this direction.
    [user@deckard .ssh]$ ssh-keygen -t dsa -f pris
    Generating public/private dsa key pair.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in pris.
    Your public key has been saved in pris.pub.
    The key fingerprint is:
    75:67:7a:ca:2f:b2:11:f4:83:50:27:07:50:0b:55:a3 user@deckard.math.ias.edu
    The key's randomart image is:
    +--[ DSA 1024]----+
    |    o+*o=        |
    |     o * .       |
    |    . E . o      |
    |     + + +       |
    |    S o + .      |
    |     o +         |
    |      . o        |
    |      ....       |
    |      .o ..      |
    +-----------------+
  • copy the key to pris and append it to authorized_keys
    [root@pris .ssh]# scp user@deckard:.ssh/pris.pub .
    user@deckard's password:
    pris.pub                                      100%  617     0.6KB/s   00:00
    [root@pris .ssh]# echo -n 'command="rsync --server -vlogDtprCz . /mnt/three" ' >>authorized_keys
    [root@pris .ssh]# cat pris.pub >>authorized_keys
  • initiate the transfer from deckard, this time acting as the sender not the receiver (flip sender for receiver)
    [user@deckard .ssh]$ rsync -e 'ssh -i pris -l deckardsync' -Cavz /home/user/two/ pris:/mnt/three
    building file list ... done
    ./
    rsync-2.6.8/
    rsync-2.6.8/COPYING
    rsync-2.6.8/README
    rsync-2.6.8/tech_report.tex
    sent 14422 bytes  received 98 bytes  29040.00 bytes/sec
    total size is 36507  speedup is 2.51
Now you just need to put that rsync line in a cronjob and you'll have automatic syncing (if you do, remember to use the full path to the ssh keys you generated). The nice thing here is that if the key should be discovered, the only thing the attacker can do is run that rsync command.
We had a problem where new clients couldn't get their keys signed properly by the puppetmaster. Both the client and the server were in perfect sync with our ntp server; date on both machines returned the expected results. We are running mongrel, so I went down the wrong path of thinking apache was to blame for the time problem. It wasn't until I started going through the certificate_factory code that I found the problem. We had errors on the certs like this:
[root@puppet ~]# cd /var/lib/puppet/ssl
[root@puppet ssl]# openssl verify -CAfile ./certs/ca.pem ./certs/client.example.com.pem
./certs/client.example.com.pem: /CN=client.example.com
error 9 at 1 depth lookup:certificate is not yet valid
Outputting the certificate showed that the cert was being signed for a future date, even though the time on the machines was correct.
[root@puppet ssl]# date
Fri Jan 29 11:31:32 EST 2010
[root@puppet ssl]# openssl x509 -text -in ca/signed/client.example.com.pem |grep -A2 Valid
        Validity
            Not Before: Feb 17 13:28:04 2010 GMT
            Not After : Feb 16 13:28:04 2015 GMT
Going through the code, I found that the date was being set in certificate_factory.rb:
def set_ttl
    # Make the certificate valid as of yesterday, because
    # so many people's clocks are out of sync.
    from = Time.now - (60*60*24)
    @cert.not_before = from
    @cert.not_after = from + ttl
end
Just for fun I ran the command through interactive ruby (irb) and discovered the source of the problem.
[root@puppet ~]# ntpdate time.example.com
29 Jan 09:02:45 ntpdate[9117]: step time server 192.168.0.1 offset -6377207.794727 sec
[root@puppet ~]# irb
irb(main):001:0> Time.now
=> Tue Apr 13 05:25:50 -0400 2010
irb(main):002:0> quit
[root@puppet ~]# date
Fri Jan 29 08:59:07 EST 2010
I still don't know why this happened; it's not a puppet bug, it's a ruby bug. date was returning the expected results. I checked timezones, everything, and all were good. It was time for a kernel upgrade anyway, so I did the upgrade and rebooted. I haven't seen the problem since :-/ The machine in question is a kvm guest running on version 88. I know there are some clock skew problems with earlier kvms, but this isn't really skew, it's far in the future... and date was still showing the correct time. So ruby must have been calculating the date wrong somehow; it doesn't really make sense... comments welcome. Anyway, if this happens to you, maybe try irb and see if ruby thinks the date is wrong.
Using fedora-ds/redhat-ds, the certs are stored in cert8.db and key3.db. I wanted to extract the private key as PEM so I could import it elsewhere.
[root@ldap] cd /etc/dirsrv/slapd-ldap
[root@ldap] pk12util -o cert.p12 -n 'server-cert' -d .
Enter Password or Pin for "NSS Certificate DB":
Enter password for PKCS12 file:
Re-enter password:
pk12util: PKCS12 EXPORT SUCCESSFUL
[root@ldap] openssl pkcs12 -in cert.p12 -out cert.pem -nodes -clcerts
Enter Import Password:
MAC verified OK
[root@ldap] cat cert.pem
Bag Attributes
    friendlyName: server-cert
    localKeyID: 10 F4 C2 F6 01 3C 66 AA 72 35 C9 A7 DA B9 12 3F 11 A1 98 F6
Key Attributes:
-----BEGIN PRIVATE KEY-----
MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBALA7rSWdSk4CVHef
...
BnevX/uQwZ3L1Qo=
-----END PRIVATE KEY-----
Bag Attributes
    friendlyName: server-cert
    localKeyID: 10 DD CC EE BB 3C 33 AC 72 35 C9 A7 DA B9 12 3F 11 A1 98 F6
subject=/C=US/ST=Any State/L=Any Town/O=Example/CN=ldap.example.com
issuer=/C=US/ST=Any State/L=Any Town/O=Example/CN=certmaster.example.com
-----BEGIN CERTIFICATE-----
ChMcSW5zdGl0dXRlIGZvciBBZHZhbmNlZCBTdHVkeTEeMBwGA1UECxMVU2Nob29s
...
gIP23WbaOw4DygMwXfbJwF5K0xxv+NALlpoaZw==
-----END CERTIFICATE-----
I couldn't figure out how to do it with pk12util and certutil alone; the trick was using openssl after exporting with pk12util...
