Blogger Editing via ${EDITOR} - GoogleCL

GoogleCL allows the use of Google tools from the command line. This allows amazing things like editing Google Docs in vim (or emacs, if you're into that sort of thing). There's also blogger and picasa functionality, and you can dump your entire Google Calendar to stdout (in CSV format, so start brushing up on your awk).

One of the things that's blatantly missing, however, is blogger integration with your favorite editor. Hopefully, this post will fix that, as it's doubling as my test of GoogleCL's blogger integration and my blogger script.




timestamp=`date -r "${BLOGFILE}" +%s`

${ed} "${BLOGFILE}"
edstatus=$?

newtime=`date -r "${BLOGFILE}" +%s`

# Only post if the editor exited cleanly and the file was actually modified.
# (Checking $? after the second date call would test date's exit status, not
# the editor's, so the editor's status is captured right away.)
if [ ${edstatus} -eq 0 -a "${newtime}" -gt "${timestamp}" ]; then
 title=`head -1 "${BLOGFILE}"`
 /bin/sed -i '1d' "${BLOGFILE}"
 echo 'Posting to blog "fmt >/dev/internets"'
 ${GOOGLE} blogger post --title "${title}" "${@}" "${BLOGFILE}"
fi

rm "${BLOGFILE}"
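The fragment assumes ${BLOGFILE}, ${ed}, and ${GOOGLE} were set up earlier in the script. A minimal, hypothetical preamble might look like this; GoogleCL itself only dictates the `google` command, and the variable names are my own:

```shell
# Hypothetical preamble for the fragment above.
ed="${EDITOR:-vi}"                        # fall back to vi if EDITOR is unset
GOOGLE="google"                           # the GoogleCL command-line entry point
BLOGFILE=$(mktemp /tmp/blogpost.XXXXXX)   # scratch file for drafting the post
```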

File inclusion courtesy of :r. Now that makes me happy.



Get the Keys of a Hash in JavaScript

I found myself today needing to do something that should be really easy in nearly any language: get a key of a given hash. (In the actual problem, any random key would do.)

After trying all kinds of different jQuery and array hacks, I finally found the solution; sometimes, you spend so much time looking in libraries you forget about the actual language:

var key = (function() { for (var foo in obj) { return foo; } })();

And that's exactly what I used; since any key is good enough, the first key is the easiest one to grab. Making it an anonymous function means I don't have to muck with break; or the like, and running it immediately means I can save the return value into a variable directly.

You can also use the for(key in Object) construct to get all the keys of any given Object. This is mostly academic, since you could always just use for directly (and without the overhead of a function). It may come in handy for debugging routines, however.

/**
 * Returns the keys of a given object.
 * @param Object obj
 * @return Array
 */
function keys(obj) {
  var ks = [];
  for (var k in obj) { ks.push(k); }
  return ks;
}



Git Trick: Multiple remotes

git is a wonderful little vcs. It took a while for me to warm up to it, but now I could never go back to svn or cvs. RCS still holds a special place in my heart, and is so damn useful I can't give it up completely. But these days, my vcs world is all git-based.

I'm sure there's a way to do this multiple-remote trick in bzr and hg, but I don't use them, so haven't bothered to figure it out. If I need hg, there's always hg-git, which may earn its own entry soon.

git makes it easy to track multiple remote branches, which is ostensibly used for sharing work between colleagues. i.e. I can 'merge steve/master' to bring all Steve's unpublished changes to master into my current local branch. This is great for hacking in small groups or pairs without needing a central server. However, the true power of this, I think, is that any git url can be a remote.

So, for example, I'm building a site based on dabl, and I want to keep it up to date with the latest changes. Now, up until last week, that meant I would check out dabl, check out my code, and then copy the files in and commit. Something like this:

git clone dabl
git clone my-project
cp -R dabl/{libraries,helpers} my-project/
cd my-project
git commit -a -v -m 'Updated dabl'
git push origin master

Now, of course, that cp plus `git commit -a` wipes out any local changes. That means that, for example, anything in helpers/ that gets modified in both my-project and dabl will be replaced by the dabl version. Usually that's fine, because I've pushed the changes from my-project upstream. Where there would logically be a conflict, it is happily ignored in favor of the upstream version. Of course, going upstream is the same procedure in the reverse direction, and has the same pitfalls.

The solution is to recognize that these are both git repositories in their own right. Just add the dabl repository as a remote in my-project. Updating is then as easy as:

git fetch dabl
git merge dabl/master

Now, I have the latest version of my upstream code, with all the benefits of merging it in. This is what I really wanted in the first place. I love cp, but it's never going to give me an octopus merge.
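The one-time setup and the update cycle can be sketched end to end with two throwaway local repositories standing in for dabl and my-project (all paths, file names, and the demo identity here are made up):

```shell
set -e
tmp=$(mktemp -d)

# Throwaway stand-in for the upstream dabl repository
git init -q -b master "$tmp/dabl"
( cd "$tmp/dabl" &&
  echo 'framework code' > lib.txt &&
  git add lib.txt &&
  git -c user.name=demo -c user.email=demo@example.com commit -qm 'framework' )

# Throwaway stand-in for my-project, with its own unrelated history
git init -q -b master "$tmp/my-project"
cd "$tmp/my-project"
echo 'site code' > site.txt
git add site.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'site'

# The trick: any git URL (here, a plain local path) can be a remote
git remote add dabl "$tmp/dabl"
git fetch -q dabl
git -c user.name=demo -c user.email=demo@example.com \
    merge -q --allow-unrelated-histories -m 'Merge upstream dabl' dabl/master

ls    # lib.txt and site.txt now live side by side
```

The `git remote add` line is the one-time setup; after that, keeping up to date is just the fetch-and-merge pair.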

But, that's only one half of the problem. Is there any way pushing upstream could be aided by this?

Okay, it would be pretty anti-climactic if the answer to that were no. As it turns out, pushing upstream is the reverse of pulling from upstream (like with the cp "solution"); you just have to be a little more careful. Let's say I need to push HEAD on master upstream:

git checkout dabl/master
git cherry-pick master
git push dabl HEAD:master

Yes, it really is that easy. "master" is a valid commit-ish, so you can cherry-pick it. You can even do exciting things like master^ and master~3. And of course, you can name a commit by its hash or tag if you want to be really safe. In practice, it generally helps to be really safe. (Note that checking out dabl/master leaves you on a detached HEAD; that's fine here, because the push names its target ref explicitly.)

A word of caution: DO NOT make your other remote call its refs "origin." This will give you and everyone you work with headaches as you will switch repositories instead of merging. And I mean "switch" in the svn sense of the word (i.e. "rebase"). Name it something sane and unique, like the name of the project.



Setting Up a pacman Repository for Archlinux

Last night, I finally got around to recompiling my own vim binary through abs. I wanted X title bar support, and the python interpreter so that one of these days I can set up PHP debugging with x-debug.

Users of emacs can kindly redirect themselves to /dev/null. </religion>

Everything went off without a hitch; package compiled and installed fine, and I got a bonus gvim package to go with it. (On an unrelated note, anybody want a mint condition gvim package with ruby and python support compiled in? Has never even been opened; I don't even know if it works.) The dependencies are a little off (now requires libxt, ruby 1.9, and python 2.6), but I already had everything installed, so it was fine for me.

Then I went through and installed an ftp server on my latest test box and got the package added to the repository. It turns out that the name of the repo is important, which is a double-edged sword. On the plus side, you can host multiple repositories on the same server without the huge directory tree required by other distros (*cough*ubuntu*cough*). On the other hand, it means that it took some trial and error to get the pacman.conf entry right:

Server = ftp://hostname.example.com/

This actually downloads ftp://hostname.example.com/repository-name.db.tar.gz, which isn't what I would have expected. It's succinct, but means that you have to know the server's internal name for the repo (as opposed to, say, http://archive.ubuntu.com/ubuntu/dists/lucid/main/).
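Spelled out, the pacman.conf entry pairs a section header with that Server line, and the section name must match the db file's basename on the server (both names below are placeholders):

```
[repository-name]
Server = ftp://hostname.example.com/
```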

The other gotcha that wasn't clear from the Arch wiki is that the packages and the db file must live in the same directory. It looks like the following should work:

root@mirror:/srv/ftp# repo-add repository.db.tar.gz pkgs/vim-7.2-1-i686.pkg.tar.xz

and everything appears to work perfectly until you try to actually download repository/vim. The db entry doesn't store the full path, so the fix is simply to move pkgs/*.tar.xz into the root of the ftp server, after which everything will magically start working.
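With that in mind, the layout that actually works keeps the db and the packages side by side (same hypothetical host and package as above):

```
root@mirror:/srv/ftp# mv pkgs/vim-7.2-1-i686.pkg.tar.xz .
root@mirror:/srv/ftp# repo-add repository.db.tar.gz vim-7.2-1-i686.pkg.tar.xz
```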

Finally, pure-ftpd >> GNU inetutils ftpd. It supports IPv6, has rate-limiting, throttling, and chroot() built in, and permits anonymous, password-less logins. The motd at client connect I'm not crazy about, but that doesn't show up in pacman, so I don't much care. IPv6, on the other hand, means not having to deal with silly things like NAT. And we all know how well ftp works with NAT (hint: PASV is a hack that can now go the way of IE6).

So, now I have a native IPv6 arch repository that I can push packages I compile from abs into. Let the hacking begin!

For those interested in what I'm up to:

Server = ftp://ftp.tingar.uni.cx/