Haskell, It Is

I've decided that I will start functional programming with Haskell. I've delved deep enough into the Learn You a Haskell for Great Good! tutorial that there's no real turning back now. In comparison, the available "starting" Erlang documentation seems daunting.

It also helps that Real World Haskell, published by O'Reilly, has gotten rave reviews and won the 2008 Jolt Product Excellence Award. (Why this isn't on O'Reilly's product page for the book, I don't know.) Remember what I wrote about books?

This is not to say I've written off the other languages I was looking at. Erlang, Scheme, and Scala are definitely leaders for the next language. In fact, before starting with Haskell, I want to take a short detour and read The Little Schemer first. I've heard good things about the book, and since I'm about to delve into functional programming, now seems like the time to read it. The worst that can happen is that it turns out to be a waste of time, and I'm sure it won't be.

(Edit: I just noticed that the title of this article was incorrect. This has been corrected. Sorry for the issues.)

Whither Scala?

It's April and that means that I should look at the new programming language for the year soon. Thus far, my leading candidates for the next one were all functional languages (in particular, Erlang, Haskell, or Scheme). Then, yesterday, through Slashdot, I read this interview with the Twitter team discussing replacing some of their Ruby backend stuff with Scala.

I'd heard of Scala before. Some time ago I ran across Ricky Clarkson's post explaining what (0/:l)(_+_) means. (For those of you who don't want to read the post, the Ruby equivalent is something like l.inject( 0 ) { |x,y| x + y }.) A little poking around led me to David Pollak's post comparing Ruby on Rails to his Scala-based Lift framework. (However, it's been over a year since then; I suspect Rails has made some improvements, though I have no hard data beyond my respect for the Rails developers, who strike me as pretty smart people.)

I have been going through the online tour of Scala periodically, and some of the features look interesting. The major issues I have currently are the use of the JVM and the keystroke overhead of static typing, but both are minor. It definitely has allure.

As I think about it more, I am not sure that Scala is the correct way to approach functional programming for the first time. In his comparison of Scala and Erlang, Yariv Sadan calls Scala "an OO/FP hybrid." This certainly seems to be true. And while being a hybrid is not necessarily a bad thing, it may not be conducive to properly learning functional programming. It's like learning to swim for the first time. If you're in the shallow area of the pool, learning to properly swim isn't that big a deal because, if something goes wrong, you can find the bottom of the pool and stand up. If you're in the deep end of the pool, you'd better know how to swim or find a way to get out of the deep end real fast.

The proper thing to do, then, might be to learn a proper functional language (Erlang, Haskell, and Scheme all have their fair share of proponents) and use it in its own right for a while. Then, next year, with some of that understanding and practice, come back to Scala and see how it merges OO with FP.

Why I'm Moving Away From Perl

When doing system scripting, I historically tend to settle on Perl. This is a habit I am trying to break.

Perl is quick and dirty and gets the job done. And the scripts I've written in Perl certainly look like it. I looked at a Perl script I wrote about six years ago and... I have no idea what it's doing! I'm sure I could figure it out if I spent a few minutes going through it (including blank lines, it's only 125 lines long), but that's beside the point. I am reminded of this joke comparing Perl and Python.

I have some Python scripts already in use. Perl's object model is painful to me and I have a hard time making use of it, so where I've decided to use objects because the task makes more sense that way, I have used Python. And I find that the Python scripts remain a little more readable when I come back to them later. (To verify this, I looked at a script I wrote over seven years ago.)

The vast number of existing scripts in Perl requires me to continue using it in the near term. However, the less often I touch a script, the less I remember about it and, therefore, the higher the cost if I need to change it for any reason. Since I would like to minimize the time spent re-understanding code, I think moving away from Perl makes the most sense.

What I haven't decided on is what language to use in the future. This will require experimenting further and gaining more experience with other languages on the sorts of tasks I use Perl for. However, I will eventually have that experience, so Perl's days are numbered.

Wikis, ./configure, and make

When installing packages from source on *nix servers, I find I need to compile the software identically across several machines. (I do realize that this would be easier if I built a single package and then installed that using the OS/distribution package management tools but that's for another day.) Or sometimes I need to upgrade a package and keep the configuration information the same.

On my personal webserver (which runs this site), I have a directory that contains scripts that include the ./configure command (or the equivalent for the package) and the options for that command. Whenever I need to rebuild a package that has such a script, I just need to run the script.
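A minimal sketch of what one of these wrapper scripts could look like — the package name, source path, and configure options below are all hypothetical; each real script just records whatever invocation that package needs. Here the script is written out from a heredoc so there's one file per package:

```shell
# Create a per-package rebuild script (all names here are examples).
# The point is that the exact ./configure invocation lives in one
# place and can be rerun verbatim for every rebuild.
mkdir -p "$HOME/rebuild-scripts"
cat > "$HOME/rebuild-scripts/rebuild-example.sh" <<'EOF'
#!/bin/sh
set -e                                  # stop on the first failed step
cd /usr/local/src/example-1.2.3
./configure --prefix=/usr/local --enable-shared
make
EOF
chmod +x "$HOME/rebuild-scripts/rebuild-example.sh"
```

Rebuilding is then just a matter of running the script; upgrading the package means editing the one cd line to point at the new source directory.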

However, some packages lack such scripts. In those cases, I have to go to the old source directory, check config.log to find how I built the package last time, and then use the same options in the new directory. (Or the same directory, if I am reconfiguring the same version of the package.) Thanks to a quirk of config.log combined with either screen or my SSH client, I often have to retype parts of the ./configure command line because it has wrapped around and the wrapping isn't preserved when I copy it.
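Since autoconf records the invocation near the top of config.log on a line beginning with "$ ./configure", much of the retyping can usually be avoided by pulling that line out with grep. A sketch — the config.log below is fabricated for illustration; on a real machine you'd run the grep against the package's actual config.log:

```shell
# Fake config.log standing in for a real one (the contents mimic what
# autoconf writes; the "$ ./configure" line is the part we want back).
cat > config.log <<'EOF'
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

Invocation command line was

  $ ./configure --prefix=/usr/local --with-ssl
EOF

# Recover the previous invocation, stripping the leading "  $ ".
grep '\$ \./configure' config.log | sed 's/^ *\$ //'
# → ./configure --prefix=/usr/local --with-ssl
```

The recovered line can be rerun directly or dropped into a wrapper script, with no hand-retyping of wrapped terminal output.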

The latter is clearly non-optimal. The former is better but is limited to a single machine. If I need to build a package in a like manner on a different machine, I either have to copy the script or recreate it on the new one. Either way, this process is somewhat cumbersome.

An improvement I am trying is documenting the build process used for all software on a wiki. This provides a central (and version-controlled) repository for the information that I can refer to independently of the SSH session. It does require a browser, but this is not an issue in today's world of large screens and the movement towards using multiple monitors. (If screen real estate is at a premium or you're not at a workstation, most wikis render well in lynx. If using ikiwiki, you could even check out the repository and view it with less.) Also, if multiple machines have the same software installed but require slightly different configurations, you can make a note in the wiki entry so you can easily see the differences. Further, since wikis implement revisions, you can see how the options have changed over time.

The challenge is making sure that all packages get documented correctly. However, once in the habit of documenting the process, it should become easier to make sure that future packages are documented as well.

Implementing A Distributed Wiki

In my last post, I mentioned wanting to set up a personal wiki. I thought about it more this morning and realized a standard, centralized wiki would not work as well as I had originally thought. In order to properly use a wiki, I need to have access to it offline on my laptop.

Doing a Google search for "distributed wiki" returned this post which mentioned ikiwiki. ikiwiki can use a version control system as its backend for articles. Using its support for git provides for a distributed wiki (in the same sense of "distributed" as git is a "distributed" version control system). There are even instructions on how to do this.

I have ikiwiki set up on the webserver now. I encountered some issues with the web update functionality because I do not allow setuid scripts to run on the partition with the websites. This has resulted in some non-optimal permissions but I hope to fix this in the near future. (Or, at the least, when I rebuild my webserver.)

The one thing I don't like about ikiwiki is needing to rerun the setup script every time I want to change something. However, this shouldn't be a problem once I'm done tinkering with the configuration. I suppose I'm just spoiled by PHP web applications (like Drupal and phpBB), and perhaps interpreted languages in general, such that having to run a command to "make" the application seems cumbersome.

I'm still working out the logistics of the laptop side of this. I'm tempted to keep the laptop's copy of the wiki repository on a USB flash drive so that I can even access the files on other machines. (Running git might be hard, but text editing should still work.) The issue with this is finding a filesystem that OS X, Windows, and Linux all like that is not FAT32. With the appropriate software, ext3fs may be the winner, but that is another topic for another day.

Edit, 03 March 2009: After some thought, I am concerned that the title is misleading. So if you've come here looking for quick instructions on how to set up a distributed wiki:

  1. Download ikiwiki.
  2. Install ikiwiki.
  3. Set up ikiwiki.
  4. Set up a git repository with ikiwiki-makerepo. (There are directions on how to do this near the end of the "by hand" setup instructions.)
  5. Configure ikiwiki to use the git repository.
  6. Test ikiwiki.
  7. Repeat the above for all other machines you may want to run the wiki on. Note that there may be special configuration needed on machines that may be offline, e.g. laptops.
  8. Enjoy your new wiki.
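For the git-backed setup, the middle steps reduce to roughly the following sketch. The paths and the "server" host are examples, this assumes ikiwiki and git are already installed, and the exact setup-file options are covered in the ikiwiki documentation:

```shell
# Step 4: create the wiki's source directory plus a bare git
# repository that ikiwiki will commit to and pull from.
ikiwiki-makerepo git ~/wiki ~/wiki.git

# Steps 5-6: after pointing the setup file's rcs/repository options
# at git, regenerate the wiki from it.
ikiwiki --setup ~/wiki.setup

# Step 7, laptop side: clone the bare repository over SSH so the
# wiki's source travels with you and can be pushed back later.
git clone server:wiki.git ~/wiki
```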

