Functional Programming in C++ Part 1

Introduction

C++ is known to be a multi-paradigm language. This is often construed
to mean you can program in both a procedural and an object-oriented
style. That is too limiting a view. C++, particularly with modern
library support, is more than capable of supporting programming in
the functional style. Even without modern libraries like Boost or
std::tr1, the Standard Template Library embodies many functional
concepts.

It also turns out that the template system in C++ defines a
metaprogramming language that is a pure, non-strict, lazy, functional
language. Of course, since this was not by design, it is not a
particularly elegant language, as anyone who has had to do template
metaprogramming can attest. For right now, though, I’m going to stay
away from metaprogramming, and look just at what can be done in the
language with the facilities provided.

What is functional programming?

At its base, it is performing computations by the pure application of
functions, with no side-effects. Of course C has allowed this since
its inception, yet it is never considered a functional language.
There is a lot of debate, and at least a little acrimony, in
determining whether a language is functional. There are certainly many
claims of ‘my language is more functional than yours.’

Languages that are widely regarded as pure functional languages are Haskell and
ML. Haskell is also getting a lot of alpha geek buzz, for a number of
reasons, and has finally hit the stage where people are learning it
in order to work on something other than a Haskell compiler.

The flavor of ML most commonly encountered is OCaml. The optimizing
compilers for the language are quite powerful, and programs often
compete with, and beat, equivalent C programs in efficiency and
performance.

The LISP family has deep functional roots, with Scheme being the dialect
that focuses the most on functional purity, as well as simplicity and
ease of teaching. Common Lisp has a much broader range of libraries
and facilities defined and standardized for it.

JavaScript, Ruby and Python also all have a strong functional component to them.

Examining these languages leads to the observation that the real keys seem to
be

  1. Functions as first class objects
  2. Referential transparency of expressions
  3. Partial application, a.k.a. currying
  4. Unnamed functions, a.k.a. lambda expressions

All of these features have been implemented in C++, in the language
itself, in the Standard Library, in TR1, in Boost, or in custom
libraries such as FC++. This article will examine these in more
depth, and show how they can be profitably incorporated into your
existing practice.

First let’s examine what the four cornerstones of functional programming
really mean.

Functions as first class objects means that functions can be used in the same
way as all other entities in the language. In particular, they
can be passed as values to functions, returned as values from
functions, and new ones can be created at runtime. In C, function
pointers can be passed and returned as values, but creating new
functions is not (portably) possible. In C++, however, we can create
objects that overload operator(), which satisfy all of the desired
properties. Function pointers can also be treated much more generally
with std::tr1::function objects.
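
For instance, something like this sketch (the exact header that provides the TR1 components varies by implementation; with GCC it lives in <tr1/functional>):

#include <tr1/functional>   // location of the TR1 headers varies by compiler

// An ordinary class whose instances are callable: a function object.
struct Square {
    double operator()(double x) const { return x * x; }
};

double half(double x) { return x / 2.0; }

// std::tr1::function erases the difference between function pointers and
// function objects, so either kind can be passed in, stored, or returned.
double apply_twice(std::tr1::function<double (double)> f, double x)
{
    return f(f(x));
}

// apply_twice(Square(), 3.0) is 81.0; apply_twice(&half, 8.0) is 2.0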

Referential transparency of expressions means that an expression, or any of its
sub-expressions, can be replaced by the value of that expression or
sub-expression without changing the behavior of the program as a
whole. A computation will always return the same result, and will
depend only on the values of its arguments. This implies that there
are no side-effects to evaluating an expression: no globals that can
be changed, producing non-local changes elsewhere. This property
gives the programmer, and the compiler and runtime, strong
capabilities to reason about the program, optimize it, and
demonstrate its correctness.
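
A tiny illustration of the difference:

// Referentially transparent: the result depends only on the arguments, so
// any call such as area(2.0, 3.0) can be replaced by its value, 6.0,
// anywhere it appears.
double area(double width, double height)
{
    return width * height;
}

// Not referentially transparent: the result depends on, and changes, state
// outside the function, so two textually identical calls can give
// different answers.
int counter = 0;
int next_id()
{
    return ++counter;
}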

Partial application is one of the most unusual, and most powerful, features
of functional programming. This is also known as currying, after
Haskell Curry, a mathematician in the mid-20th century who
did some of the fundamental work in combinatory logic and the lambda
calculus. Currying takes a function of N parameters and turns it into
a function of N-1 parameters by fixing the value of one of the
former's arguments. Or, another way of looking at it is to say that a
function that takes N arguments is equivalent to a chain of N
functions, each taking 1 argument and returning a function that
consumes the next argument. Haskell Curry invented it as a way of
talking about multi-variate functions in a theory of functions of one
variable. Well, Schönfinkel invented it first, but schönfinkeling
never really caught on.
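
In C++ the same idea is available through the binder facilities. Here is a small sketch using std::tr1::bind (Boost.Bind is nearly identical; the function and variable names are just for illustration):

#include <tr1/functional>   // std::tr1::bind, std::tr1::function, placeholders

int volume(int length, int width, int height)
{
    return length * width * height;
}

// Partial application: fix the first argument at 10, leaving a callable
// object that still expects the remaining two.
std::tr1::function<int (int, int)> base_ten =
    std::tr1::bind(&volume, 10,
                   std::tr1::placeholders::_1,
                   std::tr1::placeholders::_2);

// base_ten(2, 3) == volume(10, 2, 3) == 60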

Unnamed functions don’t sound very useful. However, combined with
higher-order functions, that is functions that take and return
functions as parameters, they become very powerful, and quite useful.
If you’ve ever wanted to supply a simple function inline in one of
the STL algorithms, but were frustrated with the syntax:

compose2(plus<double>(),
         compose2(multiplies<double>(),
                  bind2nd(plus<double>(), 0.0),
                  bind2nd(plus<double>(), 0.0)),
         bind2nd(plus<double>(), 1.0))

you’ve wanted a better way of writing lambda expressions. Lambda calculus is one of the fundamental theories of computation, and while I’m tempted to start a lengthy exposition, I think a few quick examples will give you the flavor.

(Haskell)

\x -> x*x + x + 1

(Lisp)

(lambda (k) (+(+(* k k) k) 1) )

(Boost.Lambda (c++))

_1 * _1 + _1 + 1

The key is that you can provide a definition of a function and then bind
it to a variable, rather than a fixed name. Ideally with nearly the
same syntax as you would use if you did name the function. Functional
languages are often defined in terms of lambda expressions and the
lambda calculus, with syntactic sugar sprinkled on top to make it
more palatable. I’m using syntactic sugar in the technical sense of a
well defined transformation from one syntactic form to a more
primitive syntactic form. The thing to keep in mind is that too much
syntactic sugar causes truth decay.
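
To make the Boost.Lambda flavor concrete, here is a small sketch of handing that placeholder expression straight to an STL algorithm (assuming Boost.Lambda is available; quadratic is just an illustrative name):

#include <algorithm>
#include <vector>
#include <boost/lambda/lambda.hpp>

// Apply x*x + x + 1 to every element, with the function written inline at
// the call site instead of defined and named somewhere else.
// out must already hold in.size() elements.
void quadratic(const std::vector<double>& in, std::vector<double>& out)
{
    using namespace boost::lambda;
    std::transform(in.begin(), in.end(), out.begin(),
                   _1 * _1 + _1 + 1.0);
}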

Objects vs Values

“Just like an int” has been a mainstay of C++ programming since the very
beginning. The phrase indicates that you can create a new type with
all of the capabilities of a built-in type. What many programmers
don’t realize is that the behavior of ints is the polar opposite of the
behavior expected of objects. That is because ints are quintessential
value types. Whereas an object is an entity with state and
identity, values have neither. This 7 here is indistinguishable from
that 7 over there. And 7 never changes into 8 (unless you’re using an old
version of FORTRAN). You evaluate an expression such as 7+1,
and produce an 8, but the original 7 is unaffected. Objects are
different. This object is always distinct from that object, even if
they have the same state. You can change Bob, and everyone who has a
reference to Bob sees the change also. Unfortunately, the mechanism
by which you produce new value types in C++ is exactly the same as
how you produce new object types: you declare a new class.

What you permit and accept by default with that class determines whether
instances of the class behave like values or like objects. The key
pieces of machinery in the language are familiar to all C++
programmers: default construction, copy construction, copy assignment
and destruction. The compiler will provide versions that by default act
(mostly) like a value. Mostly, because of the shallow vs deep copy issue,
and the fact that pointers are values, even if what they point to
isn’t. So what all the compiler-generated methods do is treat each
member of the class as a value, and then create, copy, assign or
destroy it, in the order the members are declared. Unfortunately, for the
programmer concerned with efficiency, and almost all C++ programmers
are, these copies are, or can be, very expensive.
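
A small sketch of what that means in practice:

#include <cstddef>

// The compiler-provided copy operations treat each member as a value:
// copying a Point copies x and y, and the two Points are thereafter
// completely independent.
struct Point {
    double x;
    double y;
};

// With a pointer member the same memberwise copy is shallow: both copies
// end up pointing at the same buffer, so a class like this has to supply
// its own copy constructor, copy assignment and destructor if it is to
// behave as a true value.
struct Buffer {
    char*       data;
    std::size_t size;
};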

Objects, on the other hand, typically either do not accept the default
implementations, or they avoid invoking them. Usually when you say
object2 = object1, you want them to continue to refer to the same
entity. In fact, assignment is fraught with danger, with potential
slicing when assigning a derived object to a base. Programmers will either
use a Handle/Body or pimpl pattern to separate the implementation
side of the class from the reference side of it, or use a smart
pointer such as std::tr1::shared_ptr. In the evil past, some
programmers would attempt to use raw pointers, and manage the
allocated resources by hand. In practice, it turns out that no one is
smart enough to do so successfully in any program of interesting
size. Or at the very least, if the original programmer is that smart,
the next person to attempt to enhance the system, or fix a bug, is
not that smart.
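
A minimal handle/body sketch along those lines, using std::tr1::shared_ptr (the class and member names are just illustrative):

#include <string>
#include <tr1/memory>   // std::tr1::shared_ptr; header location varies by compiler

// The handle is cheap to copy, and every copy refers to the same body,
// so a change made through one handle is visible through all of them.
class Person {
    struct Impl {
        std::string name;
        explicit Impl(const std::string& n) : name(n) {}
    };
    std::tr1::shared_ptr<Impl> body_;
public:
    explicit Person(const std::string& name) : body_(new Impl(name)) {}
    void rename(const std::string& name) { body_->name = name; }
    const std::string& name() const { return body_->name; }
};

// Person bob("Bob");
// Person alias = bob;       // both handles share one Impl
// alias.rename("Robert");   // bob.name() is now "Robert" as well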

Lazy functional programming languages, such as Haskell, avoid the price of
copying values by only evaluating an expression when it is needed.
This is guaranteed to be correct because of referential transparency,
although it can lead to puzzling runtime behavior. I’ve coded some
benchmarks which seemed to take practically no time, because it turned
out that the calculation was never fully evaluated. This takes not
paying for what you don’t use to the n-th degree. This call-by-need
behavior is why these languages are characterized as lazy.

Getting this behavior in C++ is tricky, but far from unheard of. One
technique, used in numerical libraries such as Blitz++, and pioneered
by Todd Veldhuizen, is known as expression templates.

The problem expression templates were introduced to solve is the creation
of unneeded temporaries. Take for example the code

Array A, B, C, D;

// ... stuff

D = A * B + C;

Where each of A, B, C and D is a two-dimensional array, with the usual definitions
for multiplication and addition. The straightforward implementations
of operator*() and operator+() lead to the creation and destruction of
temporaries, with the associated memory allocation and deallocation, e.g.

Array operator *(const Array& lhs, const Array& rhs);

has to create and return a new array, which might be quite large, only to
have it used temporarily in Array operator +(const Array&, const
Array&) and then discarded. And even once the expression on the
right-hand side of the assignment statement is evaluated, it has to
be passed into the copy assignment operator and very likely discarded
afterwards. (Note that this is not an initialization, so the
copy-elision optimization permitted there does not apply.)

Expression templates change the operators to return something else: an
expression object that encapsulates the arguments, and evaluates them only
when necessary.

ArrayExpression operator +(const Array&, const Array&);

Where ArrayExpression looks something like

class ArrayExpression {
    const Array& lhs;
    const Array& rhs;

    // ... more stuff

    double operator()(int, int) const; // compute the value at the point
};
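
To make the idea concrete, here is a minimal sketch of the technique, one-dimensional and addition-only for brevity, so it is nowhere near what Blitz++ actually provides. operator+ returns a lightweight expression object, and the element-by-element arithmetic happens only when an Array is assigned from it, so no temporary Array is ever built:

#include <cstddef>
#include <vector>

class Array {
    std::vector<double> data_;
public:
    explicit Array(std::size_t n) : data_(n) {}
    std::size_t size() const { return data_.size(); }
    double  operator()(std::size_t i) const { return data_[i]; }
    double& operator()(std::size_t i)       { return data_[i]; }

    // Assigning from any expression evaluates it element by element,
    // directly into this array, with no intermediate Array.
    template <class Expr>
    Array& operator=(const Expr& e)
    {
        for (std::size_t i = 0; i < data_.size(); ++i)
            data_[i] = e(i);
        return *this;
    }
};

// Encapsulates "lhs + rhs" without evaluating it.
class AddExpr {
    const Array& lhs_;
    const Array& rhs_;
public:
    AddExpr(const Array& l, const Array& r) : lhs_(l), rhs_(r) {}
    double operator()(std::size_t i) const { return lhs_(i) + rhs_(i); }
};

inline AddExpr operator+(const Array& lhs, const Array& rhs)
{
    return AddExpr(lhs, rhs);
}

// Array a(100), b(100), c(100);
// c = a + b;   // one pass over the data, no temporary Array
// A real library templates the expression classes on their operand types,
// so longer expressions such as a * b + c compose the same way.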

Now, for lazy languages, like Haskell, this can be built into the language
itself, and in this case the concept is called a ‘thunk’. Because by
the time you need to think about it, the thought has already been
thunk. Yes, it’s a really awful joke. You have to remember, we work
in an industry where an extremely popular language is named after an
obscure, although influential, comedy troupe. Other industries do not
consider understanding why Norwegian Blue Parrots are funny a job
qualification. Nor do most of our managers, unless they came up
through the ranks themselves.

Referential transparency makes thunks extremely powerful because not only can
they be passed around instead of fully evaluated values, but once they
are evaluated, the system can remember the value and use it
from then on. This is very much like common sub-expression
elimination, only it is a runtime feature, as well as a compile time
optimization.
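
In C++ a thunk can be hand-rolled. A sketch (using std::tr1::function; Thunk is just an illustrative name) that captures a computation, forces it on first use, and remembers the result:

#include <tr1/functional>   // std::tr1::function; header location varies by compiler

// The computation is captured at construction, run exactly once on the
// first call to get(), and the cached result is returned thereafter.
// (T must be default-constructible and assignable in this simple sketch.)
template <class T>
class Thunk {
    std::tr1::function<T ()> compute_;
    mutable bool             evaluated_;
    mutable T                value_;
public:
    explicit Thunk(const std::tr1::function<T ()>& f)
        : compute_(f), evaluated_(false), value_() {}

    const T& get() const
    {
        if (!evaluated_) {
            value_ = compute_();
            evaluated_ = true;
        }
        return value_;
    }
};

double expensive_calculation() { return 42.0; /* stand-in for real work */ }

// Thunk<double> later(&expensive_calculation);   // nothing computed yet
// double d = later.get();   // computed here, remembered from then on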

Monads, REST and C++ Template Metaprogramming

OK, with that title, I’m sure to please almost no one. If you want to know how to do REST-ful programming in Haskell, or tie REST to C++, move along, there’s nothing to see here. The connection isn’t at the implementation level. Which is the whole point.
So what do REST, Monads and (successful) Template Metaprogramming have in common?
Strict maintenance of levels of abstraction, and extremely narrow interfaces between those levels.
What does that get you?
Extremely (and possibly maximally) reusable abstractions (code).
What triggered the realization for me was a paper on Arrows by Hughes, which asked the question ‘Why are Monads so reusable when they have such a small interface?’, although not in those exact words. It dawned on me that they are so reusable exactly because they have such a narrow, but still very well defined, interface. They can do that because Monads are all about abstracting out features. A particular Monad will delegate all the real work down to the type it is being run over. Its real job is constraining the ways the values of the type it is a Monad over can be combined.
The analogue I have in mind for C++ MP is the swap() method. In a C++ template, we can assume that we can call swap on two values, that we otherwise know nothing about, and that the two values will be exchanged in a way that does not throw any exceptions, and is the best way of exchanging two values of that type. The abstract notion of swapping two values is delegated to the concrete implementation of the type being templated over.
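
The usual spelling of that delegation in generic code is a one-liner; a sketch:

#include <algorithm>   // std::swap

// The generic code assumes only that two values of T can be swapped. The
// unqualified call lets argument-dependent lookup pick up a type's own
// swap (std::vector's constant-time swap, for example), falling back to
// std::swap otherwise.
template <class T>
void exchange(T& a, T& b)
{
    using std::swap;
    swap(a, b);
}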
A REST-ful web application exposes PUT, GET, POST, and DELETE on resources named by a URI. Nominally, that is not much on which to build a really rich experience. OTOH, Google Maps mashups contradict that.
It’s not the richness of the interface at the metalevel, it’s the richness allowed at the level below. The narrow, but well defined, interface, where that interface delegates to a lower level abstraction, is exactly what enables composability.
Monads, being firmly grounded in Functors in Category Theory, take this to an extreme level of rigor. There are laws that must be satisfied for a thing to be called a Monad, and those laws ensure that other Monads can interoperate, and achieve the Principle of Least Surprise.
I’m not intending this to be Yet Another Monad Tutorial. So I won’t go too deep here. But monad transformers show a degree of generic reusability that is awfully hard to achieve with C++ templates, even though C++ templates have similar expressive power (with a horribly clumsy syntax).
Monads are functions (functors) that take types to types, and so correspond to mpl metafunctions. But they preserve the underlying structure on the types they operate over. In particular, composition is preserved. So if I have a function of type a -> M b, and a function b -> c, I get a working a -> M c for free, just by using fmap. In C++ it would be like
template <typename A, typename B> M<B> foo(A)
and
C bar(B)
getting you to
template <typename A, typename C> M<C> foobar(A) by saying fmap<A, B, C>(foo, bar), or even better just fmap(foo, bar) and letting the compiler figure out the types.
(where foo is something like
template <typename A, typename B> M<B> foo(A a);)
That looks so impossible in C++, that most C++ programmers have no intuition about it.
But every time you wish that you didn’t have to write some type that the compiler already knows, like the iterator or value_type or return_value, particularly when they get really messy, you’re looking for what Monads, or at least what Hindley-Milner type inference, provide.
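
A small illustration of the kind of noise I mean; the helper below has to spell out, in full, a type the compiler already knows (first_element is just an illustrative name):

#include <vector>

// The return type must be written out, complete with 'typename', even
// though the compiler can see what a std::vector<T>'s element type is.
template <class T>
typename std::vector<T>::value_type first_element(const std::vector<T>& v)
{
    return v.front();
}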

Types and Programming Languages: Chapter 4

I’m working through Types and Programming Languages, by Benjamin Pierce.

I’m up to somewhere around chapter 13, References, but it’s starting not to make sense. Which means it’s time to back up and do more of the work, instead of just nodding as though I really understand it.

One of the things he does is build typecheckers for the languages he describes, in the language OCaml, or Objective Caml, a dialect of the language ML. For a variety of reasons, I’m trying to implement them in Haskell. Mostly to give me something concrete to work on that isn’t too large in scope while I learn the language a little bit. Something more than ‘Hello, World!’, strip_spaces, or traversing a linked list, but less than a ‘real application’.

For those not familiar, a typechecker is somewhere between a compiler and a grammar-driven parser. A type checker inspects what you give it for type correctness and consistency. It makes sure that you don’t assign a Foo to a Bar, unless there’s a rule in the type system that says you can. It may, in the process of typechecking, do some steps that a compiler also does, like reduce expressions, but it does this in the interest of determining what the type, not the value, is. Of course, if I were writing a compiler, it would make sense not to throw away that information, and do a bit of both at once.

That does lead me to a bit of a sub-rant. The first step in the process I’m working on is parsing the textual expression. Which means using a parsing library. (Pierce does, so it isn’t cheating.) Haskell has two: Happy, a yacc analogue, and Parsec, a ‘monadic parser combinator’ library. Since the point of doing this in Haskell is to get a better idea what phrases like ‘monadic parser combinator’ mean, I was a bit biased towards Parsec. I already know and loathe yacc.

So I start in on the documentation. At least it has some. That’s a nice benefit of the fact that Haskell grew up in the academic community. They need to publish or perish, and the publication serves as documentation. Somewhat, sort of. Although Parsec doesn’t really suffer in that respect. The docs are pretty clear. They just suffer from the same problem that all parser lib docs do. They want to show that you can implement an entire compiler inside the parser.

And that’s usually a bad idea.

The docs show you how you can attach interesting semantic actions to events in the parser, like evaluating the expression that’s currently being parsed. However, in practice, that’s hardly ever what you want to do. You want the parse to return something that abstracts the textual expression into a data structure, usually some kind of abstract syntax tree, where the nodes in the tree reflect elements in the grammar. Then you can write other stuff that accepts the AST and processes it in interesting ways, like compiling or typechecking it.

That’s certainly what Pierce’s code does in ML. And I’m trying to avoid being too inventive.

In any case, it turned out to be pretty trivial to return an AST from a Parsec parser, and in fact, all the examples of real parsers that come with Parsec take that approach. Which gave me some comfort about being on the right track.

Now, the arith language that we’re starting with is pretty primitive. It has booleans and non-negative integers, aka natural numbers. And the latter are all of the form ‘successor 0’ or ‘successor successor 0’, meaning 1 and 2, respectively. Not a real language, but a place to start. Complicated enough that a few theorems could be proved non-trivially, but not so complicated you couldn’t easily work everything by hand.

The language’s syntax can be described with the following grammar


t ::=
true
false
if t then t else t
0
succ t
pred t
isZero t

This translates to the Haskell datatype ArithExpr:


data ArithExpr
    = TmTrue
    | TmFalse
    | TmIfExpr ArithExpr ArithExpr ArithExpr
    | TmZero
    | TmSucc ArithExpr
    | TmPred ArithExpr
    | TmIsZero ArithExpr
    deriving (Show, Eq)

Note that’s almost a mechanical transliteration, and that’s exactly what I’m aiming for. I’m a firm believer in the rule that there are two kinds of source code, that which obviously has no defects, and that which has no obvious defects.

So now we need a parser that will return those terms. In Parsec, that looks like this:


arithExpr :: Parser ArithExpr
arithExpr =
    trueExpression
    <|> falseExpression
    <|> ifExpression
    <|> zeroExpression
    <|> succExpression
    <|> predExpression
    <|> isZeroExpression
    <|> parens arithExpr
    <?> "expression"

So the arithExpr is a parser of ArithExpr, and the parser returns either a trueExpression, falseExpression, ifExpression, etc.

Those look like:


trueExpression =
    do{ reserved "true"
      ; return TmTrue
      }

falseExpression =
    do{ reserved "false"
      ; return TmFalse
      }

ifExpression :: Parser ArithExpr
ifExpression =
    do{ reserved "if"
      ; condition <- arithExpr
      ; reserved "then"
      ; thenClause <- arithExpr
      ; reserved "else"
      ; elseClause <- arithExpr
      ; return (TmIfExpr condition thenClause elseClause)
      }

Hopefully, even if the Haskell syntax is unusual and unfamiliar, the intent is pretty clear. true, false, if, then, else are parsed as reserved words, and true and false just return a TmTrue and TmFalse respectively. The ifExpression is a bit more interesting. It parses if, then it looks for an arithExpr, a then followed by another arithExpr, and finally an else followed by a third arithExpr, and returns the three of them wrapped up in a TmIfExpr.

So with those out of the way, we can look at how to evaluate the expressions returned by the parser, testing them against a set of evaluation rules. For this small language, there are only a few, in two sets.

For Booleans we have
terms


t ::=
true
false
if t then t else t

values


v ::=
true
false

evaluation rules


if true then t2 else t3 -> t2

if false then t2 else t3 -> t3

t1 -> t1'
----------
if t1 then t2 else t3 -> if t1' then t2 else t3

Then arithmetic is added (the terms from above are elided)

Arithmetic Expressions

new terms



t ::= ...
0
succ t
pred t
iszero t

new values



v ::= ...
nv

nv ::=
0
succ nv

new evaluation rules


t1 -> t1'
---------
succ t1 -> succ t1'

pred 0 -> 0

pred (succ nv1) -> nv1

t1 -> t1'
--------
pred t1 -> pred t1'

iszero 0 -> true

iszero (succ nv1) -> false

t1 -> t1'
--------
iszero t1 -> iszero t1'

So now we want to transliterate those evaluation rules into a single step evaluator. Now, Pierce writes his in such a way that it throws an exception when no rule matches. I haven’t figured out exceptions in Haskell, and in any case he does note in a footnote that they really aren’t considered good style in ML in the way that he uses them, to terminate a recursive function.

Instead, I’m choosing to represent the eval function as possibly returning a value, and in Haskell, that’s returning a Maybe. So the signature of my eval1 is


eval1 :: ArithExpr -> Maybe ArithExpr

and the evaluation rules can be written like so


eval1 (TmIfExpr TmTrue t _) = Just t

eval1 (TmIfExpr TmFalse _ t) = Just t

eval1 (TmIfExpr t1 t2 t3) =
    let t1' = eval1 t1
    in case t1' of
       { Just t1'' -> Just $ TmIfExpr t1'' t2 t3
       ; Nothing -> Nothing
       }

That is, if we’re eval’ing an if expression, and the first clause is true or false, we can reduce it to either the then clause or the else clause immediately. On the other hand, if it doesn’t match those, we can instead evaluate the first term, and return an if expression with the first term evaluated. If the first term doesn’t evaluate, then we return Nothing. I’m taking advantage of pattern matching in Haskell to select the right function based on the details of the argument supplied. They all take a single argument, but I’m asking to distinguish what constructor was used to create that argument. The ‘_’ character means that I don’t care what the type or value of that part of the argument is, match anything.

This is the way I wrote it at first, at least. I ran this by the haskell-cafe mailing list, and received some suggestions that I wasn’t taking good advantage of Maybe being a monad, and that it might make more sense to do the sub-eval in a do block. In particular the advice in Nomaware’s All About Monads tutorial is exactly on point. Those Nothings and Justs don’t need to be there.

I’ll be reworking the code to adopt those suggestions before going much further.

To do the full evaluation, we run the eval1 until we can’t anymore. That’s


eval t =
    let t' = eval1 t
    in case t' of
       { Just t'' -> eval t''
       ; Nothing -> t
       }


Here's all the code so far:
ArithParser.hs

module ArithParser ( parseArith
, parseArithFromFile
, arithExpr
, ParseError
) where

import Char
import Monad
import Arith

-- Parsec
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Expr
import qualified Text.ParserCombinators.Parsec.Token as P
import Text.ParserCombinators.Parsec.Language (haskellStyle)


parseArithFromFile :: String -> IO (Either ParseError ArithExpr)
parseArithFromFile fname =
    parseFromFile arithExpr fname

parseArith sourceName source =
    parse arithExpr sourceName source

arithExpr :: Parser ArithExpr
arithExpr =
    trueExpression
    <|> falseExpression
    <|> ifExpression
    <|> zeroExpression
    <|> succExpression
    <|> predExpression
    <|> isZeroExpression
    <|> parens arithExpr
    <?> "expression"


trueExpression =
    do{ reserved "true"
      ; return TmTrue
      }

falseExpression =
    do{ reserved "false"
      ; return TmFalse
      }

zeroExpression :: Parser ArithExpr
zeroExpression =
    do{ reserved "0"
      ; return TmZero
      }

ifExpression :: Parser ArithExpr
ifExpression =
    do{ reserved "if"
      ; condition <- arithExpr
      ; reserved "then"
      ; thenClause <- arithExpr
      ; reserved "else"
      ; elseClause <- arithExpr
      ; return (TmIfExpr condition thenClause elseClause)
      }

succExpression =
    do{ reserved "succ"
      ; expr <- arithExpr
      ; return (TmSucc expr)
      }

predExpression =
    do{ reserved "pred"
      ; expr <- arithExpr
      ; return (TmPred expr)
      }

isZeroExpression =
    do{ reserved "iszero"
      ; expr <- arithExpr
      ; return (TmIsZero expr)
      }



-----------------------------------------------------------
-- Tokens
-- Use qualified import to have token parsers on toplevel
-----------------------------------------------------------
tokenParser = P.makeTokenParser haskellStyle

parens = P.parens tokenParser
braces = P.braces tokenParser
semiSep1 = P.semiSep1 tokenParser
whiteSpace = P.whiteSpace tokenParser
symbol = P.symbol tokenParser
identifier = P.identifier tokenParser
reserved = P.reserved tokenParser
natural = P.natural tokenParser
charLiteral = P.charLiteral tokenParser
stringLiteral = P.stringLiteral tokenParser

Arith.hs


module Arith where



data ArithExpr
    = TmTrue
    | TmFalse
    | TmIfExpr ArithExpr ArithExpr ArithExpr
    | TmZero
    | TmSucc ArithExpr
    | TmPred ArithExpr
    | TmIsZero ArithExpr
    deriving (Show, Eq)

isNumericalVal :: ArithExpr -> Bool
isNumericalVal TmZero = True
isNumericalVal (TmSucc t) = isNumericalVal t
isNumericalVal (TmPred t) = isNumericalVal t
isNumericalVal _ = False



isVal :: ArithExpr -> Bool
isVal TmTrue = True
isVal TmFalse = True
isVal t
    | isNumericalVal t = True
    | not (isNumericalVal t) = False
isVal _ = False



eval1 :: ArithExpr -> Maybe ArithExpr

eval1 (TmIfExpr TmTrue t _) = Just t

eval1 (TmIfExpr TmFalse _ t) = Just t

eval1 (TmIfExpr t1 t2 t3) =
    let t1' = eval1 t1
    in case t1' of
       { Just t1'' -> Just $ TmIfExpr t1'' t2 t3
       ; Nothing -> Nothing --Just $ TmIfExpr t1 t2 t3
       }

eval1 (TmSucc t) =
    let t' = eval1 t
    in case t' of
       { Just t'' -> Just $ TmSucc t''
       ; Nothing -> Nothing --Just $ TmSucc t
       }

eval1 (TmPred TmZero) = Just TmZero

eval1 (TmPred (TmSucc t))
    | isNumericalVal t = Just t

eval1 (TmPred t) =
    let t' = eval1 t
    in case t' of
       { Just t'' -> Just $ TmPred t''
       ; Nothing -> Nothing -- Just $ TmPred t
       }

eval1 (TmIsZero TmZero) = Just TmTrue

eval1 (TmIsZero (TmSucc t))
    | isNumericalVal t = Just TmFalse

eval1 (TmIsZero t) =
    let t' = eval1 t
    in case t' of
       { Just t'' -> Just $ TmIsZero t''
       ; Nothing -> Nothing -- Just $ TmIsZero t
       }

eval1 _ = Nothing

eval t =
    let t' = eval1 t
    in case t' of
       { Just t'' -> eval t'' --if (t /= t'') then eval t'' else t
       ; Nothing -> t
       }

Solaris network install using Linux DHCP server

This weekend’s tech project was getting an old Sun Ultra 5 up and running with a new version of Solaris, in this case Solaris Nevada b33, so I can play with toys like OpenSolaris, DTrace, ZFS, etc.

This particular machine doesn’t have a cdrom, so in order to get things working I had to do a network install. Or I could have installed a cdrom, since it’s an IDE based machine, but that wouldn’t have been nearly as much fun.

I hosted the install on a Netra T1, so most of the installation instructions were just cut-and-paste from the Sun documentation. Solaris 10 Installation Guide: Network-Based Installations

(Note: The T1 will eventually be providing network services, and live in the basement. It’s a little loud sitting on my desk. That’s why it’s not going to be the Sparc play machine.)

The complicated part was the DHCP server. I already have one on my network, on a Debian Linux box, and didn’t want to try having two of them. That would probably be bad.

In order to supply all of the information for the install, I needed to add a new option namespace to dhcpd.conf, a class and subclass for the particular hardware type, and some information specific to the machine.

Here’s the relevant pieces from dhcpd.conf:
First the SUNW option namespace, used by the Sun net installation:


option space SUNW;
option SUNW.SrootOpt code 1 = text;
option SUNW.SrootIP4 code 2 = ip-address;
option SUNW.SrootNM code 3 = text;
option SUNW.SrootPTH code 4 = text;
option SUNW.SswapIP4 code 5 = ip-address;
option SUNW.SswapPTH code 6 = text;
option SUNW.SbootFIL code 7 = text;
option SUNW.Stz code 8 = text;
option SUNW.SbootRS code 9 = integer 16;
option SUNW.SinstIP4 code 10 = ip-address;
option SUNW.SinstNM code 11 = text;
option SUNW.SinstPTH code 12 = text;
option SUNW.SsysidCF code 13 = text;
option SUNW.SjumpsCF code 14 = text;
option SUNW.Sterm code 15 = text;
option SUNW.SbootURI code 16 = text;
option SUNW.SHTTPproxy code 17 = text;

Then the class and subclass based on the vendor-class-identifier, which is sent out by the Ultra 5 when it’s trying to DHCP an address.


class "vendor-classes" {
match option vendor-class-identifier;
}

subclass "vendor-classes" "SUNW.Ultra-5_10" {
vendor-option-space SUNW;
option SUNW.SbootURI = "tftp://10.10.1.131/inetboot.SUN4U.Solaris_11-1 ";
option SUNW.SinstIP4 10.10.1.131;
option SUNW.SinstNM = "heimdall";
option SUNW.SinstPTH = "/export/solaris11/install";
option SUNW.SrootIP4 10.10.1.131;
option SUNW.SrootNM = "heimdall";
option SUNW.SrootPTH = "/export/solaris11/install/Solaris_11/Tools/Boot" ;
}

Then, the particular information for the machine I’m trying to boot and install from the net:


host chimera {
hardware ethernet 08:00:20:a2:22:66;
option domain-name "sdowney.org";
option host-name "chimera";
next-server 10.10.1.131;
fixed-address 10.10.1.132;
}

The machine I’m installing onto is chimera, which has the MAC address 08:00:20:a2:22:66. It will get the address 10.10.1.132. The install and boot server are both heimdall, which has the IP address 10.10.1.131. The ‘next-server’ tells chimera to netboot from heimdall. I’m calling that out in particular because I wasted about an hour figuring out that I needed that.

Once all that was done, it was a ‘simple’ matter of running ‘boot net:dhcp - install’ from the OpenBoot ok prompt.

The machine isn’t exactly a screamer by today’s standards, it has a 333MHz UltraSPARC IIi chip in it, but it does have 512MB of RAM, which covers a multitude of sins. I think I may start over with a larger HD, since the 7GB drive that’s in there now doesn’t leave much room for experimentation. I’ll probably go ahead and put a DVD-RW drive in there too, even though I don’t need it now.

Brew Day!

Brewer: Steve Downey
Beer: February Ale
Style: American Amber Ale
Type: All grain
Size: 5.5 gallons
Color: 13 HCU (~9 SRM)
Bitterness: 38 IBU
OG: 1.052
FG (est.): 1.012
Alcohol: 5.2% v/v (4.1% w/w) (estimated)
Grain: 2 lb. Weyermann Dark Wheat
       10 lb. Weyermann Vienna
       1 lb. Weyermann Cara Amber
Mash: 60% efficiency
Boil: 60 minutes, SG 1.044, 6.5 gallons
Hops: 1 oz. Cascade (5.3% AA, 60 min.)
      1 oz. Cascade (5.3% AA, 30 min.)
      1 oz. Cascade (5.3% AA, 5 min.)


Recipe formatting and calculations by The Beer Recipator.

Mashed with a single decoction. Doughed in with three gallons of water to 135 degrees F. Let rest for 15 minutes, then pulled a third of the thick mash and boiled for 15 minutes. Added back, and added hot liquor to reach 158 degrees F, and let rest for about an hour (during a trip to Target with the kids…)

Sparged with another 4 gallons of liquor and collected 6.5 gallons of sweet wort. Estimated efficiency, 60%. Low. Should probably investigate.

Added the first hops in while collecting the runoff.

Boiled for 60 minutes, with two hop additions at 30 minutes and at 5. Also added 1 tsp of Irish moss at 15 minutes.

Collected 5.5 gallons of 1.052 wort, and pitched 20 grams of Safeale 33 yeast.

Total time, around 6 1/2 hours. Although I still have some clean up.

Bill de hÓra: I think I figured out the list comprehensions thing…

I’ve been trying to understand this stuff myself, and Bill de hÓra’s post has prodded me to write this down so I won’t forget it again.

List comprehensions are really just syntactic sugar. And too much syntactic sugar can cause truth decay.

List comprehensions are forms in functional and related languages that allow you to generate an entire list with a description of the list. So, for example, I can get a list of the first 10 squares by:

[ x * x | x <- [1..10] ]

[1,4,9,16,25,36,49,64,81,100]

That gives me a list of the squares of x, where x is an element of the list of integers from 1 to 10. I can also add some more qualifiers, like

[ x * x | x <- [1..10], x > 3 ]

[16,25,36,49,64,81,100]

[ i * j | i <- [1..5], j <- [1..5], i + j > 5 ]

[5,8,10,9,12,15,8,12,16,20,5,10,15,20,25]

This is all pretty neat, but what does it really mean?

You ‘normally’ think of drawing a variable from each of the generating lists, with the rightmost one varying most quickly, and skipping if the variables fail to meet the condition. This provides a natural analogue to the looping constructs in most programming languages.

for (int i = 1; i <= 5; ++i) {
    for (int j = 1; j <= 5; ++j) {
        if ((i + j) > 5) {
            list.push_back(i * j);
        }
    }
}

That close, natural, analogue can be a code smell in functional programming. It may be an indication that you’re thinking of the problem in terms of loops, updating variables, etc.

The intent of list comprehension is to simulate loops and conditions, but it can be defined in terms of map and list concat. (As far as I can tell, this is due to P. Wadler, in Simon Peyton Jones’s The Implementation of Functional Programming Languages.)


[t | x <- u] ==> map (\x -> t) u
[t | p, q] ==> concat [ [t | q] | p ]
[t | b ] ==> if (b) then [t] else []

Note that the two qualifications p and q are reversed when translated.

concat will take the results of the simplified comprehension and concatenate all of the resulting lists together, eliminating empty lists.

Let’s take the first, simple, example:

[ x * x | x <- [1..10] ]

that translates to:

map (\x -> x * x) [1..10]

The next example

[ x * x | x <- [1..10], x > 3 ]

takes a few more steps

concat [ [x * x | x > 3] | x <- [1..10] ]

concat ( map (\x -> [x * x | x > 3]) [1..10] )

concat (map (\x -> (if (x>3) then [x*x] else [])) [1..10])

In this case, the list comprehension is more concise. On the other hand, the expanded version is a bit more suggestive. Particularly if I’m willing for it not to be a lambda expression.

OktAle / Novemberfest

I finished up the keg this week. That was pretty fast, for me. I usually end up with a few stray bottles of a brew hanging on forever. But, since the keg is really all or nothing, I just finished it up.

The only downside is that I don’t have another brewing at the moment. I’ll have to get to work on that this weekend. Probably a basic Pale Ale, consisting of whatever they have on hand at my local homebrew store, Karp’s Homebrew Shop.