After upgrading to Horde 4.0.8 and IMP 5.0.9, a few users had trouble reading email: some were missing messages, and some saw no messages at all.

In dimp/Dynamic the mailbox would fail to load and the message "Error communicating with Server" would be displayed; this was triggered by ajax_error (that's about as far as we got debugging).

In imp the mailbox would show thousands of messages, but most of them said "Invalid Address" and "Unknown Date" with "No Subject".

Looking around, we finally found this bug, which shows it's due to broken ESEARCH handling in the UW-IMAP server (time for Dovecot).

The fix was the following line in imp/config/backends.local.php:

'capability_ignore' => array('ESEARCH')
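For context, here is roughly where that line sits in the file. This is only a sketch: the backend key, name, and hostname below are placeholders, and the capability_ignore entry is the actual fix.

```php
<?php
// imp/config/backends.local.php (sketch; hostname and name are placeholders)
$servers['imap'] = array(
    'name' => 'IMAP Server',
    'hostspec' => 'imap.example.com',
    'protocol' => 'imap',
    'port' => 143,
    // Work around the UW-IMAP ESEARCH bug by pretending the
    // server never advertised the capability:
    'capability_ignore' => array('ESEARCH'),
);
```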

After that we restarted apache and all was well.

There's probably an easier way to do this, but I just put this in my aliases and it works well enough.
alias fl='(for file in `find .??* * -maxdepth 0 -type d`; do du -hs "$file" 2>/dev/null; done) |sort -h -k 1'
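For what it's worth, the loop isn't strictly needed, since du will summarize each argument on its own. A shorter variant, assuming GNU du and sort (for -h):

```shell
# Same idea without the loop: du -s summarizes every argument itself.
# .[!.]* picks up dotfiles while skipping . and .. ; errors from
# non-directories or unmatched globs are discarded as in the original.
du -hs -- .[!.]* * 2>/dev/null | sort -h
```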
One of our LDAP secondaries was failing to stay in sync with the main server. We kept getting "Consumer failed to replay change" in the error log. The uniqueid and CSN were always the same, so at first I thought it was specific to the record being propagated. After a little looking, I found the following post: http://lists.fedoraproject.org/pipermail/389-users/2009-October/010278.html

In our case it was indeed the passwordRetryCount attribute that was not being accepted at the consumer. I used cl-dump, a new tool to me and a very useful one, to look at the changelog: cl-dump -w[passwd] -o/tmp/changelog.dump. I then grepped the dump for the failing parts using the uniqueid and CSN.

Rich Megginson mentions that you can set passwordIsGlobalPolicy to allow passwordRetryCount to be sent in the update. I elected to go with this solution rather than modifying the sync agreement; I think it makes sense for the retry count to increment globally. I then found this in the documentation:

http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/8.1/html/Administration_Guide/Managing_Replication-Replicating-Password-Attributes.html

/usr/lib/mozldap/ldapmodify -D "cn=directory manager" -w secret -p 389 -h consumer1.example.com
dn: cn=config
changetype: modify
replace: passwordIsGlobalPolicy
passwordIsGlobalPolicy: on

The only thing to note is that some documentation gives the last line as passwordIsGlobalPolicy: 1, which is probably an older way of changing the setting.

After making that change:

[19/May/2011:16:48:05 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): Consumer failed to replay change (uniqueid cccccccc-dddddddd-eeeeeeee-ffffffff, CSN 4dd474c2000004720000): DSA is unwilling to perform. Will retry later.
[19/May/2011:16:48:08 -0400] agmt="cn=ldap" (ldap:389) - session end: state=0 load=1 sent=3 skipped=0
[19/May/2011:16:48:08 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): Successfully released consumer
[19/May/2011:16:48:08 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): Beginning linger on the connection
[19/May/2011:16:48:08 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): State: sending_updates -> start_backoff
[19/May/2011:16:48:11 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): State: start_backoff -> backoff
[19/May/2011:16:48:11 -0400] NSMMReplicationPlugin - agmt="cn=ldap" (ldap:389): Cancelling linger on the connection

And all is happy again.

After putting CyanogenMod on my nookcolor, I wanted to try out a Bluetooth keyboard. I borrowed an Apple keyboard from a friend and started playing. Initially I paired and it looked like it was going to work right out of the box, but...

the connection kept breaking and I couldn't type more than a few characters in any app. I looked around and found Bluetooth Keyboard Easy Connect in the market. I installed it and rebooted as it advised. After that I didn't have to do anything; it just connected to the keyboard and worked great for about 5 minutes, then stopped. It seemed really flaky. I read a few posts saying that the Apple keyboard isn't just HID and does other weird things, so I got a Microsoft 6000 Bluetooth keyboard instead. After uninstalling Bluetooth Keyboard Easy Connect and rebooting, I paired with the Microsoft keyboard. Right away it worked a lot better than the Apple keyboard. Good job Apple, can't even follow the Bluetooth spec. With the Microsoft keyboard I can select icons using the arrow keys. So it does work right out of the box, as long as you don't have that Apple keyboard...

I had a user come to me saying they couldn't forward X11 from their home institution to us. I watched them log in and noticed that xauth was complaining it couldn't lock files. Looking a little deeper, the issue was that xauth creates a temporary file and then hardlinks it to .Xauthority, and this remote system uses CIFS for home directories (weird, huh?). I did some looking and found that ssh has a mechanism to take care of this. The sshd man page has an example script that almost worked for me. I changed it a small amount:
XAUTHORITY=/tmp/Xauth-$USER
export XAUTHORITY
alias xauth=$HOME/xauth
if read proto cookie && [ -n "$DISPLAY" ]; then
	if [ `echo $DISPLAY | cut -c1-10` = 'localhost:' ]; then
		# X11UseLocalhost=yes
		echo add `hostname`/unix:`echo $DISPLAY | cut -c11-` $proto $cookie
	else
		# X11UseLocalhost=no
		echo add $DISPLAY $proto $cookie
	fi
fi | tee /tmp/ssh.log | xauth -q -
To get this to work properly I made two more changes. First, I added a script called xauth to their home directory, which just makes sure XAUTHORITY is set and then runs the real xauth:
#!/bin/sh

XAUTHORITY=/tmp/Xauth-mguest20
export XAUTHORITY
exec /usr/bin/xauth "$@"
To add insult to injury, during testing I had a brain fart and was using $0 instead of $@, lame:
user@host: ./xauth add MIT-MAGIC-COOKIE etc
/usr/bin/xauth: (argv):1:  unknown command "./xauth"
:-[

OK, after that just make sure XAUTHORITY is set properly by the shell (if it is, the script isn't needed, but just in case...). The user had tcsh (dunno why, religious war I guess; I can't stand csh):

.tcshrc
setenv XAUTHORITY /tmp/Xauth-$USER
alias xauth $HOME/xauth
After that, log out and log in again, and X forwarding should start working.
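For completeness, if the user had been on bash instead of tcsh, the equivalent lines in ~/.bashrc would be (a sketch using the same paths as above):

```shell
# bash equivalent of the .tcshrc lines above, for ~/.bashrc:
export XAUTHORITY=/tmp/Xauth-$USER
alias xauth="$HOME/xauth"
```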
I have a system with a MegaRAID controller and I needed to add a new logical drive. I wanted to do it without rebooting, so I started looking around. I got MegaCLI from LSI's website and was dismayed by its apparent lack of documentation.

I found this page on le-vert.net that explained how to do everything. Phew.

List the drives

[root@server ~]# MegaCli64 -PDlist -a0 |grep -A1 "Enclosure Device"
Enclosure Device ID: 32
Slot Number: 0
--
Enclosure Device ID: 32
Slot Number: 1
--
Enclosure Device ID: 32
Slot Number: 2
--
Enclosure Device ID: 32
Slot Number: 3
--
Enclosure Device ID: 32
Slot Number: 4
--
Enclosure Device ID: 32
Slot Number: 5
--
Enclosure Device ID: 32
Slot Number: 6
--
Enclosure Device ID: 32
Slot Number: 7

List the logical devices

[root@server ~]# MegaCli64 -LDInfo -Lall -a0
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 544.5 GB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 5
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None

Create the device using the disk names

[root@server ~]# MegaCli64 -CfgLdAdd -r0 [32:5,32:6,32:7] -a0
 
Adapter 0: Created VD 1
 
Adapter 0: Configured the Adapter!!
 
Exit Code: 0x00
[root@server ~]# dmesg |tail
sdb: Write Protect is off
sdb: Mode Sense: 1f 00 00 08
SCSI device sdb: drive cache: write back
SCSI device sdb: 2927099904 512-byte hdwr sectors (1498675 MB)
sdb: Write Protect is off
sdb: Mode Sense: 1f 00 00 08
SCSI device sdb: drive cache: write back
 sdb: unknown partition table
sd 0:2:1:0: Attached scsi disk sdb
sd 0:2:1:0: Attached scsi generic sg3 type 0
The drive showed up right away, cool. 8-)
After installing certificates on the directory server and enabling SSL, the admin server wouldn't let us manage certificates. After clicking "Manage Certificates" on the Tasks tab, we'd get this error: An error has occured - Could not open file (null)

And this error would appear in the logs

[Wed Feb 16 10:41:04 2011] [notice] [client 192.168.0.1] admserv_host_ip_check: ap_get_remote_host could not resolve 192.168.0.1
I found this bug on the subject and saw the clue: the certificate authority for the cert must also be installed on the admin server.

I went to the admin console and, under Tasks, clicked Manage Certificates and saw a completely empty list under CA Certs. My cert was signed by Equifax, so I went into /etc/pki/tls/certs/ca-bundle.crt, grabbed the text of the Equifax CA certificate, installed it, and trusted it.

After that, Manage Certificates works on the Directory Server. [Screenshot: Manage Certificates showing trusted CAs]

The admserv_host_ip_check error still occurs, so it must be unrelated to this problem. I changed the AllowAccess entries in cn=NetscapeRoot, but the errors still show up in the logs... even though everything is working. Go figure.

We have a server exporting a filesystem with NFS version 3. RHEL5 clients cannot unmount the filesystem and get this error:
[root@client /]# umount /export
umount.nfs: server.example.com:/export: not found / mounted or server not reachable
Running the umount with -v shows the problem.
[root@client /]# umount -v /export
mount: trying 192.168.0.1 prog 100005 vers 1 prot tcp port 4002
umount.nfs: server.example.com:/export: not found / mounted or server not reachable
mount: trying 192.168.0.1 prog 100005 vers 1 prot tcp port 4002
umount.nfs: server.example.com:/export: not found / mounted or server not reachable
The client is trying to talk version 1 of the mount protocol to the version 3 server. For some reason the mount steps up to version 3 but the umount doesn't. The workaround is to specify vers=3 in the mount options.
[root@client /]# cat /etc/fstab |grep server
server.example.com:/export /export nfs vers=3,rw,nosuid,noac,rsize=32768,wsize=32768 0 0
[root@client /]# mount |grep server
server.example.com:/export on /export type nfs (rw,nosuid,remount,nfsvers=3,noac,rsize=32768,wsize=32768,addr=192.168.0.1)
After that the umount works fine.
[root@client /]# umount -v /export
mount: trying 192.168.0.1 prog 100005 vers 3 prot tcp port 4002
server.example.com:/export umounted
I preface this by saying that I know it's all my fault. I am really impressed by how well Barnes and Noble built the nookcolor. It's unbrickable, and I've tried (not on purpose) many times now.

I received the nookcolor in early December and rooted it almost immediately. It's a great device. I was on 1.0.0; the 1.0.1 update came out after I bought it (by 2 days, I think). I didn't bother with the update, because I knew I'd have to revert to stock to get it. Then the overclock kernel showed up, so I tried it using ClockworkMod ROM Manager. I didn't realize the overclock was for a 1.0.1 device. My machine was faster but crashed constantly. I decided to unroot and go back to stock 1.0.0. I tried the instructions in this post but ended up with something that wouldn't boot cleanly: I could adb shell into it, but it just kept playing the boot animation over and over again.

And so began my journey. I tried to unroot and apply the 1.0.1 update, but the unrooting failed, and I thought I was left with a brick. Then I tried Froyo on it; that worked, phew, device saved. From Froyo I was able to reinstall ClockworkMod and get myself back into the recovery console. I had made a backup before starting all of this, so I was able to restore from it and get back to the pre-overclocked state. That should be the end of the story, but I'm a geek...

Knowing I could unbrick it easily, I decided to try the 1.0.1 update manually.
Inside Sideload_update.zip there is a script, META-INF/com/google/android/updater-script, that appears to contain the instructions for installing the update.

Fearing nothing, I decided to just follow the script through and do what it says.

The script was completely easy to read:

assert(getprop("ro.product.device") == "zoom2" || getprop("ro.build.product") == "zoom2");
show_progress(0.100000, 6);
format("ext2", "/dev/block/mmcblk0p5");
mount("ext2", "/dev/block/mmcblk0p5", "/system");
show_progress(0.800000, 120);
package_extract_dir("recovery", "/system");
package_extract_dir("system", "/system");
symlink("dumpstate", "/system/bin/dumpcrash");
symlink("gralloc.omap3.so.1.1.15.3172", "/system/lib/hw/gralloc.omap3.so");

So, I formatted /system and extracted recovery and system into /system (minus the recovery kernel, since I wanted to keep ClockworkMod). The symlink statements are pretty easy to follow, and there's a recursive chown/chmod as well:

set_perm_recursive(0, 0, 0755, 0644, "/system");
set_perm_recursive(0, 2000, 0755, 0755, "/system/bin");
My supposition was that the first two parameters were user and group respectively, and the next two directory and file permissions. I replicated these with:
chown -R 0:0 /system
chown -R 0:2000 /system/bin
chmod 0755 `find /system -type d`
chmod 0644 `find /system -type f`
chmod 0755 `find /system/bin -type f`
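Incidentally, the backtick expansion above will break once /system holds more files than fit on one command line. A more robust sketch of the same permission passes uses find -exec, demonstrated here on a scratch directory standing in for /system (the chown lines would be unchanged):

```shell
# Replicate set_perm_recursive's permission passes with find -exec.
SYSROOT=$(mktemp -d)               # stand-in for /system in this sketch
mkdir -p "$SYSROOT/bin"
touch "$SYSROOT/build.prop" "$SYSROOT/bin/sh"
find "$SYSROOT" -type d -exec chmod 0755 {} +        # dirs: 0755
find "$SYSROOT" -type f -exec chmod 0644 {} +        # files: 0644
find "$SYSROOT/bin" -type f -exec chmod 0755 {} +    # bin files: 0755
stat -c '%a %n' "$SYSROOT/build.prop" "$SYSROOT/bin/sh"
```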
There were a few more in there but it was all fairly straightforward. I didn't put kernel-recovery into /boot/uRecImg because I wanted to keep clockwork (as I said earlier).

After that I rebooted, and sadly it didn't work. I had the boot animation from the autonooter rather than the stock one, and it just kept repeating (like I'd seen before). I booted into ClockworkMod again and on a whim formatted /data (I have a backup and a Titanium Backup of the apps). After reformatting /data, it booted fine as a stock 1.0.1 device.

Thank you very much, Barnes and Noble: this is a fantastic device, completely idiot proof (yes, I'm the idiot), and you made a lot of good design decisions. After that, autonooter did the right thing and I was able to go back to the 950MHz kernel, and it's been running solidly for a day now.

After upgrading from IE6 to IE7 for our CrossOver Pro users, we found that they were all getting the runonce page (http://runonce.msn.com/runonce3.aspx) and couldn't get rid of it. Searching the CodeWeavers support site didn't help; the only advice given there was to ignore the issue. After looking around a bit, I found that the way to remove it is to fool IE into thinking it has already displayed the page, using regedit as described in the comments on this page.
I ran regedit using cxrun and added these values to the registry.
[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
"RunOnceHasShown"=dword:00000001
"RunOnceComplete"=dword:00000001
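The same values can also be staged as a .reg file and imported in one step instead of adding them by hand. A sketch; the temp path is arbitrary:

```shell
# Write the two RunOnce values to a .reg file for regedit to import.
cat > /tmp/runonce.reg <<'EOF'
REGEDIT4

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
"RunOnceHasShown"=dword:00000001
"RunOnceComplete"=dword:00000001
EOF
```

Importing it from inside the bottle (e.g. running regedit /tmp/runonce.reg via cxrun) should produce the same result as editing the values manually.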

After that I decided to change the default search to Google, using regedit as well (since changing it doesn't work from CrossOver):

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\SearchScopes\{2FEDD0BC-4D55-413C-8B59-BFE70133A2CB}]
"DisplayName"="Google"
"URL"="http://www.google.com/search?q={searchTerms}&rls=com.microsoft:{language}&ie={inputEncoding}&oe={outputEncoding}&startIndex={startIndex?}&num=20&startPage={startPage}"
After that, IE7 opens to our specified homepage and the search bar goes to Google as it should.
