
29 April 2012

What is Generic Programming?



Over the course of my career I have strived to write reusable code, always looking for more general patterns and trying to refactor code into more flexible, more generic versions, sometimes with mixed success. Lately it has become something of an obsession: I both try to employ reuse and have been endeavoring to learn more about it, what it really is, and how to do it right, since, as I have previously pondered, I believe there is a complexity cost to increased reuse. Over the last few years, during which I have been programming in Java, generics have become a central tool for achieving reuse. So that concept naturally comes to mind, but it is just one idea in a larger framework of ideas about generic programming.


There is a fair amount of research and discussion of the ideas pertaining to generic programming. The title of this post is taken directly from a paper on the topic, which perhaps not so coincidentally was presented at the same OOPSLA (2005) as the paper on Software Reuse and Entropy that I previously discussed. It seems like every idea I explore is intensely interrelated with everything else, and this post will be no exception; I can conceptually connect these two papers, Kevin Bacon style, through just two other research papers, ideas I have for other posts. I will try to keep this one as focused as possible. Also, the referenced paper is, like the Software Framework paper, very math heavy, but with different math: Category Theory, Order Theory, Universal Algebra, etc. This post will focus on the higher-level concepts and not the details of math that I wish I could claim to fully understand.


So back to our question: what is generic programming? For me it is about reuse. While the information entropy approach to understanding reuse was about reuse as something measurable, and how that measurable reuse varies across different domains, I feel that the generic programming concept is about the structure of reuse and how it is implemented in various programming paradigms, though the concept itself transcends language-specific interpretations and implementations.


As you can tell, I am not sure exactly how to answer this question, so I will defer to the experts.


"What is generic programming?" in part outlines the paradigmatic distinctions between mainstream OO languages (Java Generics, C++ with STL) and Functional languages (Haskel, Scheme):


As for most programming paradigms, there are several definitions of generic programming in use. In the simplest view generic programming is equated to a set of language mechanisms for implementing type-safe polymorphic containers, such as List<T> in Java. The notion of generic programming that motivated the design of the Standard Template Library (STL) advocates a broader definition: a programming paradigm for designing and developing reusable and efficient collections of algorithms. The functional programming community uses the term as a synonym for polytypic and type-indexed programming, which involves designing functions that operate on data-types having certain algebraic structures.
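
To ground that first, simplest view, here is a minimal sketch in Scala (my own toy example, the names are purely illustrative): a single function parameterized by a type is written once and works, type-safely, for lists of any element type.

    object SimplestView {
      // Written once; reused, type-safely, for any element type A.
      def firstOrElse[A](xs: List[A], default: A): A =
        xs.headOption.getOrElse(default)

      def main(args: Array[String]): Unit = {
        println(firstOrElse(List(1, 2, 3), 0))        // 1
        println(firstOrElse(List.empty[String], "-")) // -
      }
    }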

In "Generic Programming" written in 1988 by David A. Musser and Alexander A. Stepanov use Ada and Scheme and describe generic programming as:


By generic programming, the definition of algorithms and data structures at an abstract or generic level, thereby accomplishing many related programming tasks simultaneously.  The central notion is that of generic algorithms, which are parameterized procedural schemata that are completely independent of the underlying data representation and are derived from concrete efficient algorithms.

...

A discipline that consists of the gradual lifting of concrete algorithms abstracting over details, while retaining the algorithm semantics and efficiency.


That last sentence has a very Agile/Refactoring feel to it.
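
As a loose illustration of that gradual lifting (my own toy example in Scala, not Musser and Stepanov's Ada), take a concrete integer sum and lift the element type, the operation, and its identity into parameters, while keeping the shape and efficiency of the algorithm:

    // Step 0: the concrete algorithm, tied to Int and addition.
    def sumInts(xs: List[Int]): Int = xs.foldLeft(0)(_ + _)

    // Step 1: lift the element type, the operation, and its identity element
    // into parameters; the shape of the algorithm (a left fold) is unchanged.
    def reduce[A](xs: List[A], identity: A)(op: (A, A) => A): A =
      xs.foldLeft(identity)(op)

    // The concrete algorithms are recovered as instances of the generic one:
    val total   = reduce(List(1, 2, 3), 0)(_ + _)        // 6
    val product = reduce(List(1, 2, 3), 1)(_ * _)        // 6
    val joined  = reduce(List("a", "b", "c"), "")(_ + _) // "abc"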


Ralf Hinze in "Generic Programs and Proofs" describes it as: 


A generic program is one that the programmer writes once, but which works over many different data types.

...

Broadly speaking, generic programming aims at relieving the programmer from repeatedly writing functions of similar functionality for different user-defined data types. A generic function such as a pretty printer or a parser is written once and for all times; its specialization to different instances of data types happens without further effort from the user. This way generic programming greatly simplifies the construction and maintenance of software systems as it automatically adapts functions to changes in the representation of data.

...

So, at first sight, generic programming appears to add an extra level of complication and abstraction to programming. However, I claim that generic programming is in many cases actually simpler than conventional programming.  The fundamental reason is that genericity gives you "a lot of things for free" ...


This last statement resonates with me, as it parallels my thoughts on the desirability of using software frameworks, and I feel that I continuously have this conversation (argument) with junior-level programmers and also senior-level cargo cult coders who don’t get it.
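
Here is a loose Scala analogue of Hinze's pretty printer idea, done with a type class and implicits rather than his polytypic machinery (this is my sketch, not his mechanism): the list printer is written once, and its specialization to lists of any printable element type happens without further effort from the user.

    trait Pretty[A] { def pretty(a: A): String }

    object Pretty {
      def show[A](a: A)(implicit p: Pretty[A]): String = p.pretty(a)

      implicit val prettyInt: Pretty[Int] = new Pretty[Int] {
        def pretty(a: Int): String = a.toString
      }
      implicit val prettyString: Pretty[String] = new Pretty[String] {
        def pretty(s: String): String = "\"" + s + "\""
      }
      // Given a printer for A, the printer for List[A] comes "for free".
      implicit def prettyList[A](implicit p: Pretty[A]): Pretty[List[A]] =
        new Pretty[List[A]] {
          def pretty(xs: List[A]): String = xs.map(p.pretty).mkString("[", ", ", "]")
        }
    }

    // Pretty.show(List(1, 2, 3))  // [1, 2, 3]
    // Pretty.show(List("x", "y")) // ["x", "y"]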


Design patterns are a common generic concept in mainstream programming. It turns out that a number of design patterns can be described in the framework laid out by these papers, and a number of papers do exactly that, including "Design Patterns as Higher-Order Datatype-Generic Programs", part of extensive research on the topic by Jeremy Gibbons:


Design patterns, as the subtitle of the seminal book has it, are ‘elements of reusable object-oriented software’. However, within the confines of existing mainstream programming languages, these supposedly reusable elements can only be expressed extra-linguistically: as prose, pictures, and prototypes. We believe that this is not inherent in the patterns themselves, but evidence of a lack of expressivity in the languages of today. We expect that, in the languages of the future, the code parts of design patterns will be expressible as directly-reusable library components. The benefits will be considerable: patterns may then be reasoned about, typechecked, applied and reused, just as any other abstractions can. Indeed, we claim that the languages of tomorrow will suffice; the future is not far away. All that is needed, in addition to what is provided by essentially every programming language, are higher-order (parametrization by code) and datatype-generic (parametrization by type constructor) features. Higher-order constructs have been available for decades in functional programming languages such as ML and Haskell. Datatype genericity can be simulated in existing programming languages [2], but we already have significant experience with robust prototypes of languages that support it natively [2]. We argue our case by capturing as higher-order datatype-generic programs a small subset ORIGAMI of the Gang of Four (GOF) patterns. (For the sake of rhetorical style, we equate ‘GOF patterns’ with ‘design patterns’.) These programs are parametrized along three dimensions: by the shape of the computation, which is determined by the shape of the underlying data, and represented by a type constructor (an operation on types); by the element type (a type); and by the body of the computation, which is a higher-order argument (a value, typically a function). Although our presentation is in a functional programming style, we do not intend to argue that functional programming is the paradigm of the future (whatever we might feel personally!). Rather, we believe that functional programming languages are a suitable test-bed for experimental language features - as evidenced by parametric polymorphism and list comprehensions, for example, which are both now finding their way into mainstream programming languages such as Java and C#. We expect that the evolution of programming languages will continue to follow the same trend: experimental language features will be developed and explored in small, nimble laboratory languages, and the successful experiments will eventually make their way into the outside world. Specifically, we expect that the mainstream languages of tomorrow will be broadly similar to the languages of today — strongly and statically typed, object-oriented, with an underlying imperative mindset — but incorporating additional features from the functional world — specifically, higher-order operators and datatype genericity.

Lately I have been making more of an effort to learn Scala. Bruno Oliveira and Jeremy Gibbons make the case in "Scala for Generic Programmers" that it might be one of the best languages, or even the best language, for expressing generic programming concepts, in part due to its hybridization of the object-oriented and functional paradigms. Here is the abstract in its entirety:


Datatype-generic programming involves parametrization by the shape of data, in the form of type constructors such as ‘list of’. Most approaches to datatype-generic programming are developed in the lazy functional programming language Haskell. We argue that the functional object-oriented language Scala is in many ways a better setting. Not only does Scala provide equivalents of all the necessary functional programming features (such as parametric polymorphism, higher-order functions, higher-kinded type operations, and type- and constructor-classes), but it also provides the most useful features of object-oriented languages (such as subtyping, overriding, traditional single inheritance, and multiple inheritance in the form of traits). We show how this combination of features benefits datatype-generic programming, using three different approaches as illustrations.

So far, for me, Scala has been a joy. Its features and flexibility are beautifully elegant; of course, I am tainted, coming from the clumsy, clinking, clanking, clattering, caliginous world of Java. BTW Java, we need to talk. It’s not you, it’s me. I’ve met someone else.
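
To make "parametrization by the shape of data" a bit more concrete, here is a small Scala sketch of my own (not taken from either paper) using higher-kinded type parameters and implicits, two of the features the abstract mentions: the function at the bottom is written once against any type constructor F[_] that has a Functor instance.

    import scala.language.higherKinds

    trait Functor[F[_]] {
      def map[A, B](fa: F[A])(f: A => B): F[B]
    }

    object Functor {
      implicit val listFunctor: Functor[List] = new Functor[List] {
        def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
      }
      implicit val optionFunctor: Functor[Option] = new Functor[Option] {
        def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
      }
    }

    object Shapes {
      // Written once; works for List, Option, or any other shape with a Functor.
      def labelWithLength[F[_]](fs: F[String])(implicit F: Functor[F]): F[(String, Int)] =
        F.map(fs)(s => (s, s.length))
    }

    // Shapes.labelWithLength(List("ab", "c")) // List((ab,2), (c,1))
    // Shapes.labelWithLength(Option("scala")) // Some((scala,5))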


Jeremy Gibbons makes the following point in "Datatype-Generic Programming":


Generic programming is about making programming languages more flexible without compromising safety. Both sides of this equation are important, and becoming more so as we seek to do more and more with computer systems, while becoming ever more dependent on their reliability.

This idea is similar to ideas that Gerald Sussman explores in his Strange Loop 2011 talk "We Really Don’t Know How To Compute". However, his general and perhaps philosophical approach to the idea of generic programming is very different from the previous ones:


[8:40] We spend all of our time modifying existing code, the problem is our code is not adequately evolvable it is not adequately modifiable to fit the future in fact what we want is to make systems that have the property that they are good for things that the designer didn’t even think of or intend. ... That’s the real problem. ... The problem is that when we build systems whatever they are we program ourselves into holes. ... It means that we make decisions early in some process that spread all over our system the consequences of those decisions such that the things that we want to change later are very difficult to change because we have to change a very large amount of stuff. ... I want to figure ways that we can organize systems so that the consequence of the decisions we make are not expensive to change, we can’t avoid making decisions but we can try to figure out ways to minimize the cost of changing decisions we have made.

...

[11:40] Dynamically extensible generic operations ... I want to dynamically extend things while my program is running ... I want a program to compute up what it is going to do.


He shows an example [19:00] which is "the definition of the method of computing Lagrange Equations from a Lagrangian given a configuration space path". This is apparently easy stuff, to him at least. Fortunately you don’t really need to understand the details of the math to understand his points here:


[20:45] This is a little tiny feeling of what is needed to make things more flexible. Consider the value here, the value is that I am dynamically allowed to change the meaning of all my operators, to add new things that they can do. Dynamically. Not at compile time, at runtime.

...

[21:15] It gives me tremendous flexibility. Flexibility because the program I wrote to run on a bunch of numbers with only a little bit of tweaking suddenly runs on matrices so long as I didn’t make mistake of commuting something wrong. ... So this makes proving the theorems very hard. In fact maybe impossible the cost and the benefits are very extreme. I can pay correctness or proofs of correctness or belief in correctness, I can pay that for tremendous flexibility. ... Is correctness the essential thing I want, I am not opposed to proofs, I love proofs I am an old mathematician. The problem is that putting us into the situation that Mr. Dijkstra got us into ... where you are supposed to prove everything to be right before you write the program getting into that situation puts us in a world where we have to make the specifications for the parts as tight as possible because it’s hard to prove general things, except occasionally it’s easier but it’s usually harder to prove a general theorem about something so you make a very specialized thing that works for a particular case you build this very big tower of stuff and boy is that brittle change the problem a little and it all falls over.
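
Here is a very loose Scala sketch of my own of what a dynamically extensible generic operation could look like (this is not Sussman's Scheme system; in his demo the operation is extended from numbers to matrices and functions): handlers are looked up by the runtime classes of the arguments, and new handlers can be registered while the program is running.

    import scala.collection.mutable

    object GenericAdd {
      private val handlers =
        mutable.Map.empty[(Class[_], Class[_]), (Any, Any) => Any]

      // Extend the meaning of "add" at runtime for a new pair of types.
      def extend[A, B](ca: Class[A], cb: Class[B])(f: (A, B) => Any): Unit = {
        handlers((ca, cb)) = (x: Any, y: Any) => f(x.asInstanceOf[A], y.asInstanceOf[B])
      }

      // Dispatch on the runtime classes of the arguments.
      def add(x: Any, y: Any): Any =
        handlers.get((x.getClass, y.getClass)) match {
          case Some(f) => f(x, y)
          case None    => sys.error(s"no add handler for ${x.getClass} and ${y.getClass}")
        }
    }

    // GenericAdd.extend(classOf[java.lang.Integer], classOf[java.lang.Integer])((a, b) => a.intValue + b.intValue)
    // GenericAdd.extend(classOf[String], classOf[String])((a, b) => a + b)
    // GenericAdd.add(1, 2)         // 3
    // GenericAdd.add("foo", "bar") // foobar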


His ideas tend towards adaptable and self-modifying code; at one point he talks about self-modifying code in both assembly language and the use of eval in Lisp. He also touches on a number of parallels in the biological world, including genetics and the adaptability of the human immune system to fight off mutating parasites. Interestingly, some of his ideas seem similar to work done by Stephanie Forrest on using evolutionary/genetic algorithms for program repair, which she spoke about in her keynote at SPLASH 2010, the conference formerly known as OOPSLA. The genetic repair approach at the conceptual level is striving for the same results as what Gerald Sussman describes. Also, did you notice that the Levenshtein distance between "generic programming" and "genetic programming" is 1 (one substitution)? Pretty weird!
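
For the curious, here is a quick way to check that claim with the standard dynamic-programming edit distance (a throwaway sketch of mine):

    def levenshtein(a: String, b: String): Int = {
      val dist = Array.tabulate(a.length + 1, b.length + 1) { (i, j) =>
        if (i == 0) j else if (j == 0) i else 0
      }
      for (i <- 1 to a.length; j <- 1 to b.length) {
        val cost = if (a(i - 1) == b(j - 1)) 0 else 1
        dist(i)(j) = math.min(math.min(dist(i - 1)(j) + 1, dist(i)(j - 1) + 1),
                              dist(i - 1)(j - 1) + cost)
      }
      dist(a.length)(b.length)
    }

    // levenshtein("generic programming", "genetic programming") // 1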


Seriously though, the objectives of all of these approaches are the same: building better, more flexible software that handles more situations. These ideas are reflected in how Glenn Vanderburg describes software engineering:


Software engineering is the science and art of designing and making, with economy and elegance, systems so that they can readily adapt to the situations to which they may be subjected.

I feel this software engineering insight reinforces my opinion that we will eventually see a real math-backed discipline of software engineering, and I think that the research work discussed here both supports my conjecture on this point and most likely lays part of that foundation. I also think some of these ideas reflect ideas I discussed in "The Software Reuse Complexity Curve", which implies that programmers will need to become more aware of, and more adept at, advanced theoretically based concepts, for example GADTs (generalized algebraic data types), and of course math in general, such as Abstract Algebra.


Clearly generic programming encompasses some very specific ideas and some very general objectives. There seem to be two approaches here, which might be described as predictive vs. adaptive, much like the idea of Separation of Design and Construction described by Martin Fowler. The more calculational approach of Gibbons et al. is predictive, whereas the Sussman/Forrest ideas describe an adaptive approach. I suspect that this will just be another dimension of program design that will have to be considered and assessed as a tradeoff in its application, which Sussman also mentions.

05 February 2012

The Software Reuse Complexity Curve

I have come to the conclusion that complexity and reuse have a nonlinear relationship with a parabolic shape, shown below: complexity increases on the Y axis and reuse increases on the X axis. Code complexity is high both with very low reuse and with very high reuse, forming a minimum which might be viewed as an optimization point for reusability. Now remember, this is a very simple two-dimensional graph; a real optimization scenario for this type of problem may include many dimensions and probably would not have a nice, clearly defined single minimum.




Scenario #1 Low Reuse


In a low-reuse situation everything that could have been implemented using reusability is duplicated, and may also be implemented inconsistently, so the code base will be larger, which requires a maintainer to learn and know more code. Also, since there will be variations of varying degree between the similar components, the maintainer has to be constantly aware of these differences, which might be subtle; one might even find them confusing at times.


As we have previously seen, reuse relates to entropy, so a low-reuse system would have a large, random, “uncompressed” codebase, which has high entropy. I have seen this type of system in my career, and it is something I hate to work on.


Skill vs. Complexity


Before I go to the opposite end of the spectrum, which is probably more interesting, and perhaps more speculative on my part, I’d like to talk about the relationship of code complexity to the skill of the maintaining developer.


I believe, and this really shouldn’t be a stretch, that there are some types of complex code that require certain higher-level proficiencies and theoretical knowledge; an obvious example is parser and compiler construction. A developer who works in this area needs to have a good understanding of a number of disciplines, which include but are not limited to Formal Language Theory, Automata Theory, and possibly some Abstract Algebra, which sadly many developers lack. If you had a project with components that are any type of compiler or parser and needed someone to create and subsequently maintain that code, they would need to know these things; they would need to understand this complexity in an organized, theoretical way. The alternative would be an ad hoc approach to solving these problems, which would probably yield significant complexity issues of its own.


These types of things can be seen at a simpler scale: a system that makes use of state machines or heavy use of regular expressions will exhibit more compact and more complex code. The maintainers have to have the skill level to deal with this more complex approach; sadly, not all developers do.
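
A tiny illustration of that trade-off (my own toy example in Scala): the regular expression version is far more compact, but it only helps if the maintainer can read regexes.

    object DateCheck {
      private val IsoDate = """(\d{4})-(\d{2})-(\d{2})""".r

      // The compact, "compressed" version.
      def looksLikeIsoDate(s: String): Boolean = s match {
        case IsoDate(_, _, _) => true
        case _                => false
      }

      // The verbose version the regex compresses away.
      def looksLikeIsoDateVerbose(s: String): Boolean =
        s.length == 10 && s.zipWithIndex.forall { case (c, i) =>
          if (i == 4 || i == 7) c == '-' else c.isDigit
        }
    }

    // DateCheck.looksLikeIsoDate("2012-04-29") // true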


Now you can probably find numerous code bases where these types of approaches, and others, were not used where they should have been. These codebases will be larger and will require a case-by-case effort to learn something that could have been “compressed” and generalized with better theoretical understanding. It’s a trade-off: if you cannot hire or retain people who have the advanced skills, an advanced codebase can also be a maintenance liability.


Scenario #2 High Reuse


Now let’s create a hypothetical example: say we have a system that has many similar components, and a brilliant programmer manages to distill this complexity into compact, highly reusable code. The problem is that our brilliant programmer is the only one who can really understand it. Ideas similar to this are pondered by John Cook in his post "Holographic Code". Now it might be the case that the code is really only understandable to our brilliant programmer, or he may simply be the only one in the organization with that level of proficiency. In this case we will assume that complexity was not artificially introduced, i.e. it is not unnecessary complexity, which is also a real problem that arises in software systems. We will also assume that we know a less complex solution exists that is more verbose with less reuse, and in fact the solution of the maintaining developer might be to completely rewrite it in that less complex fashion.



As we have previously seen, complete compressibility of a code base can never be achieved, which relates heavily to aspects of both Information Theory and Algorithmic Information Theory; this results in diminishing returns in terms of shrinking an algorithm or codebase. My speculation here, based on observation, is that as a codebase is "compressed" its complexity increases, as shown above. Essentially you need to be increasingly clever to continue to increase reuse. Now, some of these reuse complexity issues may be due to our lack of understanding of complexity in relation to software construction. It is feasible that things that we find hard to gain reuse with today may be easier in the future with advances in this understanding.


Another idea from the research covered in my previous post is that different domains exhibit different degrees of favorability (or resistance) to reuse. I would also suspect that the types of reusable components vary across different domains, which makes me think that perhaps in the future reusability in different domains can be guided both qualitatively and quantitatively, by empirical or theoretical means or both. This means that you might have fore-knowledge of the "compressibility" of a "domain solution", and then you would be able to guide the development effort appropriately.


I feel the ideas here are a step back from possible absolutist thinking about reuse. Reuse, like any important approach in software, for example test coverage, which can also exhibit diminishing returns, needs to be tempered and viewed as an optimization in the extent of its practice. In many respects that’s what good engineering is: working within constraints and optimizing your outcome within those constraints.

31 July 2011

Software Frameworks: Resistance isn’t Futile

As I have previously discussed, in my opinion there are three main framework components that can be described succinctly as libraries, rules, and templates. It is the library component that I want to talk about here, perhaps in a context that might be seen as more evidence in support of frameworks in certain cases. Now, to recap, building a framework does not make sense for all projects; the two main scenarios that I have seen that are extremely conducive to it are, first, an organization with multiple similar projects in similar problem domains, and second, a medium to large project with a lot of commonality that would favor reusability across the project. One of my complaints about the framework debate is that it is argued in black and white terms: either frameworks are absolutely required or they are the worst thing you can do. Now I am sure that many developers can anecdotally cite either side of this argument, which is what I feel really drives this debate, and there is no doubt that I do this as well, but the goal, in my opinion, is to step back and look at this problem from a bigger perspective.

One place that I have found some interesting perspective is in a paper by Todd L. Veldhuizen titled "Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and Zipf’s Law" (there is a slide version here). Now, a word of caution: this paper is very math intensive, but it should be possible to read it and gain some insights without understanding the math. For example, in the paper he states the following:

A common theme in the software reuse literature is that if we can only get the right environment in place— the right tools, the right generalizations, economic incentives, a "culture of reuse" — then reuse of software will soar, with consequent improvements in productivity and software quality. The analysis developed in this paper paints a different picture: the extent to which software reuse can occur is an intrinsic property of a problem domain, and better tools and culture can have only marginal impact on reuse rates if the domain is inherently resistant to reuse.

I think this is a good observation. Many projects that I have worked on have exhibited characteristics that are favorable to reuse, but I have read a number of counterarguments, especially from people at fast-paced startups where the flux of the system’s evolution is potentially resistant to reusability; actually, in that case it’s probably the SDLC, not necessarily the problem domain, that is resistant. Also, I would suspect the probability of reuse is in part inversely proportional to system size, so for small systems reuse is less likely, or at least there will be a smaller set of reusable components, so an investment in reuse may not be seen as justified.

Another interesting observation from the paper is:

Under reasonable assumptions we prove that no finite library can be complete: there are always more components we can add to the library that will allow us to increase reuse and make programs shorter. To make this work we need to settle a subtle interplay between the Kolmogorov complexity notion of compressibility (there is a shorter program doing the same thing) and the information theoretic notion of compressibility (low entropy over an ensemble of programs).

This is especially interesting if you have some familiarity with Information Theory, and if you don’t, I recommend learning more about it. Here he is comparing characteristics of both Algorithmic Information Theory [Kolmogorov complexity] and Shannon’s Information Theory [the information theoretic notion]. Roughly, Algorithmic Information Theory is concerned with the smallest algorithm that generates the data, and Shannon’s Information Theory is about how to represent the data in the most compact form. These concepts are closely related to data compression, and in the paper this is paralleled with the idea that reusing code makes the system smaller in terms of lines of code, or more specifically symbols, which effectively "compresses" the codebase. In Algorithmic Information Theory you can never really know whether you have the smallest algorithm, so I may be taking some liberty here, but I think the takeaway is that when trying to create reuse you could probably keep going forever, so one needs to temper that desire with practicality. In other words, there is probably a point where any subsequent work towards reuse is a diminishing return.

I find the paper compelling, and I confess that perhaps I am being bamboozled by math that I still do not fully understand, but intuitively these ideas feel right to me. Also, the application of Zipf’s law is interesting and should be pretty intuitive. Once again, roughly, Zipf’s law relates to power curve distributions, also related to the 80/20 rule. The prime example is the frequency of English words in text: words like [and, the, some, in, etc.] are much, perhaps orders of magnitude, more common than words like [deleterious, abstruse, etc.]. This distribution shows up in things like the distribution of elements in the universe (think hydrogen vs. platinum), the wealth distribution of people (you vs. Bill Gates), how many followers people have on Twitter, etc., and, since the curve is scale invariant, at smaller scales as well. In software, some components will often see a fair amount of reuse, things like string copy functions, entity base classes, etc., whereas others may only be reused a couple of times.
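
If you want to see the shape of that distribution for yourself, a few lines of Scala will do it (a throwaway sketch of mine; point it at any reasonably large text file):

    import scala.io.Source

    object ZipfCheck {
      def main(args: Array[String]): Unit = {
        val text = Source.fromFile(args(0)).mkString.toLowerCase
        val counts = text.split("""[^a-z']+""").filter(_.nonEmpty)
          .groupBy(identity)
          .map { case (word, occurrences) => (word, occurrences.length) }
          .toSeq
          .sortBy { case (_, n) => -n }

        // In a Zipf-like distribution frequency falls off roughly as 1/rank.
        counts.take(10).zipWithIndex.foreach { case ((word, n), i) =>
          println(f"rank ${i + 1}%2d  $word%-12s $n%d")
        }
      }
    }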

On the lighter side, power curves relate to Chaos Theory. I have seen a number of people, including Andy Hunt, draw parallels between Agile and Chaos Theory; although these are usually pretty loose, it does strike me that one way to model chaos is through iterated maps, which is reminiscent of the iterative process of agile, and the attractor and orbit concepts seem to parallel the target software system as well.

Another place, and this is one that most developers will find more accessible (I would have led with this, but the title and flow leaned the other way), is Joshua Bloch’s "How To Design A Good API and Why it Matters". Actually, I think this is a presentation that every developer should watch, especially if you are involved in high-level design and architecture related work, and don’t worry, there is no math to intimidate the average developer. A summary article version can be found here, but I would still recommend watching the full video at least once, if not multiple times. Slides for an older version of the talk can be found here.

In his presentation he talks about getting the API right the first time because you are stuck with it. I think this helps illuminate one very important and problematic aspect of framework design and software development. The problem can be illustrated with a linguistic analogy: in linguistics there are two ways to define rules for a language, prescriptive, where you apply (prescribe) rules onto the language, vs. descriptive, where the rules describe the language as it exists; I strongly favor the descriptive. One of the most famous English language rules, "do not end sentences with prepositions", is a prescriptive rule thought to come from Latin, introduced by John Dryden and famously mocked by Winston Churchill: "This is the sort of nonsense up with which I will not put.", pointing out that it really doesn’t always fit a Germanic language like English. I know I’m a bit off topic again with my "writer’s embellishment", not to mention that the same idea is discussed in depth by Martin Fowler, which he terms "Predictive versus Adaptive".

It is a common problem, which Agile attempts to address, and it is common in general software design and construction; it also occurs in API and framework design. Software construction as we know is an organic process, and I feel that frameworks are best developed in part out of that process through Martin Fowler’s Harvesting, which can be termed descriptive framework creation. What Joshua Bloch in part describes, and to some degree cautions against, can be described as a prescriptive approach to APIs and frameworks. I think many developers, including me, have attempted to create framework components and APIs early in a project, usually driven by a high-level vision of what the resulting system will look like, only to find out later that certain assumptions were not valid or certain cases were not accounted for1. What he talks about is a pretty rigid prescriptive model which is in many ways at odds with an adaptive agile approach; I feel that the more adaptive agile approach is really what is needed for the framework approach, and we do see this via versioning. For example, the differences between Spring 1.x and Spring 3.x are substantial, and no one would want to use 1.x now, yet there are apps still tied to it. Also, an approach of complete backwards compatibility was used with Java Generics, specifically Type Erasure, which has led to a significant weakening of the implementation of that abstraction. It is my understanding that Scala has recently undergone some substantial changes from version to version, leading some to criticize its stability while others counter that it is the only way to avoid painting yourself into a corner like Java did. The harvesting approach will often involve refinement of, and changes to, the components that are extracted, and this can lead to the need for refactorings that can potentially affect large amounts of already existing code. It’s a real chicken and egg problem.
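
For anyone who has not run into the erasure weakness directly, here is a small demonstration (my own sketch, written in Scala, which erases type arguments on the JVM just as Java does): the type argument is simply gone at runtime, so an obviously wrong pattern match still succeeds.

    object ErasureDemo {
      def main(args: Array[String]): Unit = {
        val xs: Any = List("a", "b", "c")

        // Compiles with only an "unchecked" warning, and matches at runtime,
        // because the Int type argument no longer exists after erasure.
        val answer = xs match {
          case _: List[Int] => "looks like a List[Int] to the JVM"
          case _            => "something else"
        }
        println(answer) // prints the first branch, even though xs holds Strings
      }
    }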

He starts his presentation with the following Characteristics of a Good API:

Characteristics of a Good API
  • Easy to learn
  • Easy to use, even without documentation
  • Hard to misuse
  • Easy to read and maintain code that uses it
  • Sufficiently powerful to satisfy requirements
  • Easy to extend
  • Appropriate to audience

He also makes the following point:

APIs can be among a company's greatest assets

I think this sentiment is reflected in Paul Graham’s "Beating the Averages"; of course that is more about Lisp, but the underlying principle is the same. Actually, an interesting language-agnostic point comes from Peter Norvig. I can’t find the reference, but he said he had a similar attitude towards Lisp until he got to Google and saw good programmers who were incredibly productive in C++. I feel that this is all just the framework argument: maximizing reusability by building reusable high-level abstractions within your problem domain that allow you to be more productive and build new high-level components more quickly; it’s all about efficiency.

In regards to API’s he adds:

API Should Be As Small As Possible But No Smaller

To which he adds:

  • When in doubt leave it out.
  • Conceptual weight (the number of concepts) is more important than bulk.
  • The most important way to reduce weight is reusing interfaces.

He attributes this sentiment to Einstein, but he wasn’t sure about the attribution. I did some follow-up; the paraphrasing is "Everything should be made as simple as possible, but no simpler." or "Make things as simple as possible, but not simpler." More about that can be found here. These are good cautions against over-design or over-engineering of APIs, frameworks, and software in general. I have definitely been guilty of this at times; once again, this is something that needs to be balanced.

The following is what I consider to be seminal advice:

All programmers are API designers because good programming is inherently modular, and these inter-modular boundaries are APIs, and good APIs tend to get reused.

As stated in his slides:

Why is API Design Important to You?
  • If you program, you are an API designer
    • Good code is modular–each module has an API
  • Useful modules tend to get reused
    • Once module has users, can’t change API at will
    • Good reusable modules are corporate assets
  • Thinking in terms of APIs improves code quality

The next two quotes are pretty long, and it was no easy task transcribing them, he talks really fast; I tried to represent them as accurately as possible. I also feel that he really nails some key ideas here, and I wanted to have a written record of them to reference, since I am not aware of any other:

Names matter a lot, there are some people that think that names don’t matter and when you sit down and say well this isn’t named right, they say don’t waste your time let’s just move on, it’s good enough. No! Names, in an API, that are going to be used by anyone else, and that includes yourself in a few months, matter an awful lot. The idea is that every API is kind of a little language and people who are going to use your API need to learn that language and then speak in that language and that means that names should be self explanatory, you should avoid cryptic abbreviations so the original Unix names, I think, fail this one miserably.

He augments these ideas by adding consistency and symmetry:

You should be consistent, it is very important that the same word means the same thing when used repeatedly in your API and you don’t have multiple words meaning that same thing so let us say that you have a remove and a delete in the same API that is almost always wrong what’s the difference between remove and delete, Well I don’t know when I listen to those two things they seem to mean the same thing if they do mean the same thing then call them both the same thing if they don’t then make the names different enough to tell you how they differ if they were called let’s say delete and expunge I would know that expunge was a more permanent kind of removal or something like that. Not only should you strive for consistency you should strive for symmetry so if you API has two verbs add and remove and two nouns entry and key, I would like to see addEntry, addKey, removeEntry, removeKey if one of them is missing there should be a very good reason for it I am not saying that all API’s should be symmetric but the great bulk of them should. If you get it right the code should read like prose, that’s the prize.

From the Slides:

Names Matter–API is a Little Language
  • Names Should Be Largely Self-Explanatory
    • Avoid cryptic abbreviations
  • Be consistent–same word means same thing
    • Throughout API, (Across APIs on the platform)
  • Be regular–strive for symmetry
  • Code should read like prose
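
To put the consistency and symmetry point above into code terms, here is a tiny hypothetical Scala sketch (the method names mirror Bloch’s example; the trait itself is mine):

    trait Registry[K, E] {
      // Two verbs (add, remove) x two nouns (entry, key) = four symmetric methods.
      def addEntry(e: E): Unit
      def addKey(k: K): Unit
      def removeEntry(e: E): Unit
      def removeKey(k: K): Unit
      // A lone deleteKey or expungeEntry here would break both consistency and
      // symmetry, and force users to guess at the difference.
    }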

I feel that this hits some essential concepts which really resonate with me; in fact, I have follow-on posts planned to further deconstruct and develop these ideas more generally. From the framework perspective this also gets at some of the variety of the framework code components: they can be Paul Graham’s functional Lisp abstractions, they can be DSLs, they can be object-oriented like Spring, Hibernate, and the Java API, etc. Any framework built for a domain will have its own conceptual vocabulary or API language that is a higher-level abstraction of the problem domain, Domain Specific Abstractions, and they all benefit from concepts like consistency and symmetry as appropriate to the domain.

The following is a very common and widespread problem that often inhibits reuse and leads to less efficient production of lower quality software:

Reuse is something that is far easier to say than to do. Doing it requires both good design and very good documentation. Even when we see good design, which is still infrequently, we won’t see the components reused without good documentation.

- D. L. Parnas, Software Aging. Proceedings of the 16th International Conference on Software Engineering, 1994.

He adds:

Example code should be exemplary; one should spend ten times as much time on example code as on production code.

He references a paper called "Design Fragments" by George Fairbanks (also here), which looks interesting, but I have not had time to read it yet.

This, I believe, is a critical point that is possibly symptomatic of problems with many software projects. I feel that projects never explicitly allow for capturing reuse in terms of planning, schedule, and developer time. Often reuse is a lucky artifact of having proactive developers who make the extra effort to do it, and it can often be at odds with the way I have seen projects run (mismanaged). I have some follow-up planned for this topic as well.

In terms of design he adds this interesting point:

You need one strong design lead that can ensure that the API that you are designing is cohesive and pretty and clearly the work of one single mind or at least a single minded body, and that’s always a little bit of a trade off, being able to satisfy the needs of many customers and yet produce something that is beautiful and cohesive.

I have to confess that I do not recall how I came across Todd Veldhuizen’s paper or Joshua Bloch’s talk, but I felt that they were really about similar ideas. In writing this and finding all of the references again, I realized that my association of the two was not coincidental at all, for they are both part of the Library-Centric Software Design workshop (LCSD'05) at the conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA'05), with Joshua Bloch delivering the same talk as the keynote address.

Now I admit that one goal of mine is to put these ideas, much of which were borrowed from the two referenced works, into a written form that I can refer back to; I hope this stands up by itself, but it is really written for my future entries. Also, for the record, this is not the first time I have "leeched" off of Joshua Bloch’s work.

Ultimately my ideas will diverge from some of those regarding API design; Joshua Bloch speaks from a position of greater responsibility in terms of APIs. I see them as part of a framework continuum used to construct software, and I think many of his ideas apply directly and more generally to that. Also, I see the framework and the system as interlocked and feel that frameworks can drive structure and consistency; for example, object-oriented APIs, aka frameworks, can in turn drive the structure of software. The Spring Framework is a good example of this: Spring is built heavily around IoC (inversion of control), which is closely related to the dependency inversion principle, the D in SOLID.

There will always be arguments against frameworks; the classic one is that a framework creates more code that is more complex and requires more learning. My counterargument to this is twofold. First, any code that is in production probably needs to be known and understood, which might require it to be learned regardless of how efficiently it was created; also, if a framework creates reusability and consistency, the initial learning curve will be higher, but each subsequent encounter with a codebase constructed this way should be easier, and highly redundant, inconsistent code is potentially (much) more difficult to learn and maintain because there is more of it. Second, if your framework API is well defined and well documented, it should make the resultant code much easier to understand and maintain, aka "read like prose", because much of the "generic" complexity is "abstracted" downwards into the framework level. For example, compare the code needed to implement a Spring Controller to the underlying classes that do the work, such as AnnotationMethodHandlerAdapter. Now, if you have defects and issues at that lower level they will be harder to fix, and changing common code can have side effects on other dependent code; it’s not a perfect world.

I think the issue with reuse and the framework approach comes down to asking the right question: how resistant (or favorable) are your domain and your SDLC to reuse? I think most domains have some, if not a fair amount of, favorability to reuse, and I see reuse as increased efficiency in software construction.

1Perhaps: "...certain cases for were not accounted." Dryden blows.