Tuesday, May 06, 2014

What is the opposite of fragile?
Perhaps you would answer "robust" or something similar. Not quite. Something fragile gets worse when stressed. Something robust is neutral to stress. The opposite of fragile is something that improves with disorder and stress. There is no word for that property, so we call it "antifragile", a term coined by Nassim Nicholas Taleb. The concept is studied in great detail and from many angles in his excellent book Antifragile: Things That Gain from Disorder. I recommend it to anyone.
There are a number of examples of antifragile systems in nature: our skeletons and muscles, our immune system. They must be stressed to become effective. Genes become fitter when subjected to varying stresses.
There are fewer examples of man-made antifragile systems, but anyone working in the software field will recognize the need. Our systems, and systems development efforts, are fragile. They fail when stressed, in particular when the stress is of an unexpected kind. This is unacceptable given software’s central role in our civilization. Many organizations today are helpless when their systems fail. Some organizations don't even exist in any meaningful way outside their systems (Facebook, Twitter, etc). We need to find ways to build antifragile systems, systems that improve under stress. Is that even possible? I think it may be.
One example would be BitTorrent. When a torrent becomes popular the load increases, but each downloader (leecher) also uploads the same torrent to others, so the more people download a torrent the more available it becomes. BitTorrent is antifragile to load.
Another example could be an optimizing compiler that uses runtime instrumentation of a system as input when compiling the system. The more use cases we subject the system to, the better the optimization will be the next time. This system is antifragile to usage.
In both these cases there is a feedback mechanism, implicit in the case of Bittorrent and explicit in the compiler. I would be happy to hear of more examples of antifragile systems. This is not a concept software developers in general are familiar with today, but I think they will have to be in the future.

Monday, May 03, 2010

Software for Simulation and Learning

Software is usually written for effect. We want it to do something for us. But there is another way to look at software. A program becomes a kind of rather dry textbook on its subject, with the extra benefit of being runnable. When writing this "textbook" we need to learn a lot about the domain of the system, sometimes deeper and more precisely than even experts in the field need to. They can often get away with handwaving but there is no way to wave your hands at a computer. Well, you can wave them all you like but it won't do you any good.

Once you've figured the domain out, the knowledge has been made explicit in code for others to read. Many intellectual fields stop there, with a textual description. A textbook. The magic of programming is that the program can also run. You can play with it, study its behavior, write unit tests illuminating standard and special cases. The program is not only a formal description, it is also a simulation, a runnable model of some domain, a toy universe. This duality is a very powerful and unusual teaching tool.

Gerald Sussman, of Scheme fame, has expressed this much more eloquently than I can in his lecture "Why programming is a good medium for expressing poorly understood and sloppily formulated ideas" (unfortunately only available to ACM members).
I had the good fortune to see Professor Sussman deliver the lecture at OOPSLA 2005.
Sussman borrowed the title from a 1960s Marvin Minsky paper. The idea is that a poorly understood and sloppily formulated domain will become illuminated and stringently formulated by expressing it in code. A domain that already is well understood and can be expressed formally is also best described and taught in the form of software. Sussman has actually co-written two works incorporating these ideas: "The Structure and Interpretation of Computer Programs", which contains software simulations of digital electronics and a register machine (a simple model of a computer), and, more explicitly, "The Structure and Interpretation of Classical Mechanics", which is about advanced mechanics.
There are signs that this idea is spreading: the SEC is apparently considering expressing its regulations as Python code in the future.

All programmers have heard that code is read many more times than it is modified, that you should code for humans, not for the compiler, etc. This advice is given with the intention of making programs maintainable, not with the idea that software is an important knowledge repository and teaching tool in itself. In fact, it is often the ONLY authoritative description of, for instance, an organization's business rules, because what is in the software is what is actually executed. The only people who really understand a modern business in detail may be the programmers, who quite often are contractors on another continent. They are the only ones exposed to the learning opportunity that the duality of formal description languages and executability provides.

The world is not just about business; there are also conspiracy theories! :)
I think a few of the more bizarre ones could be laid to rest if a detailed open source multi-simulation of an Apollo moon landing was available.

There are fields where simulation and modelling via software has been embraced wholeheartedly: Economics and Climatology.
Unfortunately they are as "poorly understood" as it gets.
The systems they study are vast, complex, full of feedback loops and work on time scales of years and sometimes decades.
Expressing a poorly understood idea formally can give the model an appearance of precision it does not deserve. "There's no sense being exact about something if you don't even know what you're talking about", as John von Neumann supposedly said.
It is so bad that even practitioners in these fields can't distinguish between model and reality. People talk about efficient markets as though they really exist and regard Modern Portfolio Theory as prescriptive, not as a model dependent on certain assumptions. Extrapolated data from climate models are used to decide the future of the world. On a smaller scale, over-reliance on models caused problems when the volcanic eruption in Iceland in early 2010 led to most of Europe's airspace being shut down... not based on actual measurements of dust in the atmosphere but on models predicting how the dust should spread. When KLM and Lufthansa actually performed test flights in supposedly particle-filled areas there was no problem.

For a software model of a domain to be useful it must be testable against some kind of reality. When the domain is digital electronics or classical mechanics this is done by experiments. In a business environment it is achieved by constantly talking to experts and users, and demoing to them. As for climate, we'll just have to wait a few hundred years.

In order to build a Sussmanesque executable encyclopedia of computable knowledge I believe it would be a good idea to look to an existing successful encyclopedia: Wikipedia. Coincidentally, Jimmy Wales also gave a talk at OOPSLA 2005. Open source, backed by a strong community that represents many views and insists on verifiability, is the way to go.

Friday, May 09, 2008

Cars, computers and robots

Lately, I've been thinking about the future. More specifically, which new industries will become important in the foreseeable future? Some might say nanotech, quantum computing or artificial general intelligence (AGI). I agree all of those will be important eventually, but I think it will take some time. I’m more interested in the near future (but not next-year-near).
The 20th century saw the rise of two industries that changed everything: automotive and computing. They have some properties in common:
  • They solve problems common to everyone: transportation and computation/information/communication (although most people didn’t realize they had a need for computation before they had a computer).
  • Both were enabled by new underlying technologies: advances in mechanical engineering/Taylorism/refineries and in microprocessors/cheap lasers, respectively.
  • Both have given rise to new professions and supporting industries.
  • Both require a common infrastructure. Cars require roads, computers require the internet to be really useful.
  • Both are in some sense universal. All cars can drive on all roads, if you can drive one car type you can drive any other, a car can transport anyone or anything that is small enough to fit inside. Any computer can run any program (via emulation, albeit slowly) as long as it has enough memory and appropriate I/O-devices.
  • Both have affected consumers and businesses profoundly: they have changed everything.

Are there more industries that fit these criteria? Telephony comes to mind. I don’t think airlines make the grade: long-distance transportation is not a universal need, and airlines don’t shape our homes, streets and workplaces the way the others have. Television is an important consumer technology, but hasn’t affected business much.

Is there anything like automotive and computing on the horizon? I think there is: robots. Robots have the potential to be the “computer” of the physical world.
  • The common problem: a personal assistant. A butler, if you like. Who doesn’t need at least one assistant, either at home or at work?
  • New technologies: cheap sensors. Faster computers, of course.
  • The common infrastructure might be a tagging system to make the physical world easier to navigate. Perhaps a use for RFID tags, a solution looking for a problem if there ever was one.
  • What about universality? The physical world has such a vast range of scale, temperature, pressure, etc that a truly universal robot is hard to imagine. But almost all of us spend our time in human-made environments using tools made for humans. A universal robot in this context is a humanoid human-sized robot which can be programmed with new behaviors as needed.
  • New professions: robot behavior designer is an obvious one, and there will probably be many we can’t envisage yet.
  • Ability to change everything: Universal robots productify services. Productification of services has been a theme since the industrial revolution, but with universal robots it will really take off. That will change everything.

When could it happen? Usually, new inventions take 10-20 years to move from the lab to the store. Given the current kinetic capabilities of prototypes like Honda Asimo and Boston Dynamics Big Dog, and advances in prosthetics technology, I’d say robots will be mass-market products within that time frame. 20 years from now they will also be desperately needed. Within 10-20 years many rich countries will begin to feel the effects of low birth rates. Japan’s population is already shrinking. In fact, most rich countries would have shrinking populations if they didn’t have immigration (the United States is an exception). In 20 years’ time nearly all baby boomers will have retired but most will still be alive. The largest and wealthiest generation ever will have stopped producing but will still be consumers. And what will they buy? Health care and personal services, both personnel-intensive businesses. If economic growth in the developing countries continues to be higher than in rich countries, migration to rich countries will probably slow down as there are more opportunities at home. So the rich countries will have a labor shortage. The stage will be set for personal robots.

But wouldn’t you have to solve “strong AI”/AGI to build a useful assistant? I don’t think so. We don’t expect to have a conversation about Shakespeare sonnets with our washing machine or Roomba. Turning knobs and pushing buttons is an acceptable user interface for specialized machines, that doesn’t have to change just because the robot is universal. Of course an assistant would have to solve much messier problems, like walking around in a cluttered home and being able to distinguish between clean and dirty socks, but if insects with tiny little brains can navigate and solve problems in the natural world we should be able to build machines with similar capabilities, especially if they get to cheat a little with tagged objects.

So robotics is not a bad career choice, but remember that when there’s a gold rush it’s not the miners who get rich, it’s the people selling shovels.

Monday, May 05, 2008

I'm number 33 on Computer Sweden's list of the top developers in Sweden: http://computersweden.idg.se/2.2683/1.159414.
It's nice to be recognized, but also a bit embarrassing since I know many developers who are better than me but aren't on the list at all.
Ivar Jacobson is number 1 on the list. I like his quote: "If you're one of Sweden's top developers, you're one of the world's top developers." :)



Wednesday, March 19, 2008

Emulating the Internet

Many systems are developed on LANs but deployed over the Internet. Inevitably, this leads to performance and reliability surprises. These surprises can be easily avoided if you run Linux, since the Linux kernel has built-in features for limiting bandwidth, introducing latency, packet loss, duplication, etc. Google netem and you will find the information you need to get started.
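As a hedged starting point (the interface name and the numbers are placeholders; see man tc-netem for what your kernel supports), typical invocations look something like this:

```
# add 100 ms of delay and 1% packet loss to traffic leaving eth0
tc qdisc add dev eth0 root netem delay 100ms loss 1%

# change the emulated conditions (delay with jitter, some duplicated packets)
tc qdisc change dev eth0 root netem delay 300ms 50ms duplicate 0.5%

# remove the emulation when you're done
tc qdisc del dev eth0 root
```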


Tuesday, May 29, 2007

Programming in our parallel future

The future of computing is parallel, or “multi-core” as it’s currently called. In 2005 dual-core processors became available, in 2007 it’s quad core, and presumably we’ll see octa-core machines in 2008. Intel has shown a prototype 80-core processor and expects to ship such processors in 2011. In 10 years’ time a typical computer will probably have hundreds of cores. Many applications have a 10-year life span, so how will you ensure that the system you are working on now will be able to utilize the computer hardware of the future? Let’s examine some ways to introduce opportunities for parallelism into your system.

The sun always shines
The first method is that of the lazy optimist: someone else will fix it. Improved compilers, library implementations, runtime environments or RDBMSs will take care of the problem and you’ll just keep coding as usual. I wouldn’t bet on it. More specifically I wouldn’t bet my company, or my client’s company, on it.

Is the old school the new school?
A common architecture in Unix applications (and Windows applications designed by Unix people) is that of Pipes and Filters. The application consists of a number of processes communicating through pipes or sockets. There is a natural parallelism here: each process can run on its own core. There can also be a nice correspondence between the problem domain and the executing system; often each process represents a domain object. In practice there are rarely more than a dozen processes, and most of the work is often performed by two or three of them, so the actual speedup when running on a multicore system is not linear. After 2008 or so your system won’t be able to take advantage of new hardware; it will continue to run at the same speed on newer, nominally faster computers. One trick to speed up the system is to run several instances of the critical processes, but if they weren’t designed for that it will probably be a lot of work to make it possible.
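As a minimal, hedged sketch of the style (POSIX calls, toy data, names my own): a producer process writes text through a pipe to a filter process that uppercases it. The two processes share no memory, need no locks, and can run on separate cores.

```cpp
#include <cctype>
#include <cstdio>
#include <cstring>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fd[2];
    if (pipe(fd) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                         // child: the producer process
        close(fd[0]);
        const char* msg = "pipes and filters\n";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                           // parent: the filter process
    char c;
    while (read(fd[0], &c, 1) == 1) {
        putchar(toupper(static_cast<unsigned char>(c)));
    }
    close(fd[0]);
    waitpid(pid, nullptr, 0);               // wait for the producer to finish
    return 0;
}
```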

Windows style
A long time ago Windows wasn’t very process-friendly and its pipe implementation wasn’t very good either, so Pipes and Filters was not a good fit for the Windows platform. Dynamic GUIs that didn’t freeze up were important, however. So instead of partitioning applications into processes, there was only one process with several threads of execution. In theory there is not much difference between a process and a thread: a process has its own address space while a thread uses someone else’s address space. In practice they are used quite differently. Threads are more often transient, created and destroyed dynamically. Threaded designs usually make no attempt to map onto the domain the way the processes of a Pipes and Filters system do; the threads are considered implementation details (on the other hand, this can be a source of strength: it means that we can have lots of threads). Also, threads are a constant source of headaches in a way that processes aren’t.

One problem is synchronization. Usually it’s a lack of synchronization in some unusual situation, which makes the system crash; sometimes it’s too much synchronization, which makes the system freeze. In a Pipes and Filters system you get this synchronization for “free” from the operating system. A practical tip here: multithreaded applications absolutely must be developed on multicore machines. I’ve worked on no less than three different multithreaded systems developed on single-core machines. When deployed on multicore machines for performance reasons they all promptly crashed. The applications were under-synchronized, which wasn’t obvious on a single-core machine because the thread scheduler introduced some implicit synchronization: the threads were scheduled in the same order almost every time. On the multicore machines the scheduling became non-deterministic and things fell apart.

This brings us to the big problem with threads: they’re just too hard. These systems were carefully designed and reviewed by very smart people. Ad-hoc thread designs require an inordinate amount of work to be successful, so to be productive we need structured approaches to using threads. Fortunately, there are at least two threading patterns that scale well and isolate the application programmer from the plumbing. It’s no accident that both are in use at Google, a company that relies on massive clusters of standard PCs. MapReduce (in its current form) originated at Google; distributed event systems have been independently invented elsewhere.
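Before looking at those patterns, here is a small C++ sketch (invented for illustration) of what "under-synchronized" means in practice: two threads increment a shared counter, and removing the mutex turns it into a data race that may go unnoticed on a single core but corrupts the count once the threads truly run in parallel.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;            // shared state
std::mutex counter_mutex;   // protects counter

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex); // remove this line and the program is under-synchronized
        ++counter;
    }
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << "\n";  // 200000 with the mutex; unpredictable without it
}
```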

Distributed event systems
In a distributed event system, the entire system consists of event handlers. An event handler may generate a number of new events, each triggering another event handler. The potential for parallelism arises when a large number of events are active in the system simultaneously, their handlers executing on different cores. This paradigm frees the application programmer from concerns about threads, synchronization and which handler should run on which core, but someone has to implement an event distribution framework that takes care of all that efficiently. Such frameworks have been developed in-house at Google and other companies, but as far as I know none are available commercially or as open source.
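To make the idea concrete, here is a minimal, hedged single-process sketch in C++ (the names are my own; a real framework would distribute the queue across cores or machines and deal with scheduling and failures):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <utility>

// An event carries a type and a payload; handlers may emit new events.
struct Event { std::string type; std::string payload; };

class EventSystem {
public:
    void on(const std::string& type,
            std::function<void(const Event&, EventSystem&)> handler) {
        handlers_[type] = std::move(handler);
    }
    void emit(Event e) { queue_.push(std::move(e)); }
    void run() {  // a real framework would drain this queue on many cores
        while (!queue_.empty()) {
            Event e = queue_.front(); queue_.pop();
            auto it = handlers_.find(e.type);
            if (it != handlers_.end()) it->second(e, *this);
        }
    }
private:
    std::map<std::string, std::function<void(const Event&, EventSystem&)>> handlers_;
    std::queue<Event> queue_;
};

int main() {
    EventSystem es;
    es.on("order_placed", [](const Event& e, EventSystem& sys) {
        std::cout << "charging card for " << e.payload << "\n";
        sys.emit({"order_charged", e.payload});   // handlers generate new events
    });
    es.on("order_charged", [](const Event& e, EventSystem&) {
        std::cout << "shipping " << e.payload << "\n";
    });
    es.emit({"order_placed", "book #42"});
    es.run();
}
```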

MapReduce
Anyone who has programmed in Lisp or other functional languages should be familiar with Map and Reduce. Map applies a function to each element of a collection, returning a new collection containing the results of the applications. An example could be applying the uppercase function to a string, returning a new string of the same letters in UPPERCASE. Reduce also applies a function to the elements of a collection, returning a single value. The canonical example is applying ‘+’ to a list of numbers, returning the sum of the numbers. So why is this interesting?

  1. It turns out that many algorithms can be expressed, elegantly, in terms of Map and Reduce (MapReduce).
  2. MapReduce can be efficiently implemented on a wide variety of multicore hardware.
  3. The application programmer using MapReduce does not have to worry much about concurrency, threads or synchronization. Leave that to the übergeeks implementing MapReduce.
In addition to the implementations used inside Google, there are fortunately open source implementations of MapReduce available in Java and Ruby.
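As a minimal, hedged single-machine illustration of the programming model (not Google's distributed implementation), the C++ standard library's std::transform and std::accumulate play the roles of map and reduce:

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

int main() {
    // "Map": apply toupper to each character, producing a new string.
    std::string input = "map and reduce";
    std::string upper(input.size(), ' ');
    std::transform(input.begin(), input.end(), upper.begin(),
                   [](unsigned char c) { return static_cast<char>(std::toupper(c)); });

    // "Reduce": fold '+' over a collection of numbers, producing one value.
    std::vector<int> numbers = {1, 2, 3, 4, 5};
    int sum = std::accumulate(numbers.begin(), numbers.end(), 0);

    std::cout << upper << "\n" << sum << "\n";  // MAP AND REDUCE, then 15
}
```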

Conclusion
It is possible to design systems now that will utilize both current and future hardware efficiently, and still be understandable and maintainable.


Friday, January 19, 2007

P2P banking and an application to student loans

The entertainment industry was the first to experience the power of the P2P idea. Telecom may be next (http://www.fon.com). After that I believe banks will be the next victim, or the next opportunity if you are an entrepreneur. P2P banking is actually already here, check out http://www.zopa.com. Another example (which isn't really banking, but almost) is http://www.kiva.org.

One interesting application of P2P banking would be student loans. In Sweden we have an expensive government agency handling applications for student loans, determining who is worthy, making payments, etc. We also have quite a lot of unemployed university graduates. P2P student loans would solve two problems: getting rid of an unnecessary government agency and guiding students towards subjects that will enable them to get a job. The mechanism is simple: lenders will obviously prefer students studying marketable subjects, lowering the interest rate for those students. Students of anthropology would have to pay higher rates, since the risk of unemployment (and of the lender not getting the money back) is greater. Perhaps no one would be interested in lending to them at all if the risk of unemployment is perceived as too great, and the prospective student would be forced to consider another subject.
The invisible hand in action.


Tuesday, April 25, 2006

We need an MBA for developers

An ambitious business person often gets an MBA, a Master of Business Administration, after a few years of management experience. An MBA usually consists of two years of half-time study in parallel with your ordinary day job. If you go to a good school it’s ridiculously expensive. A lot of the value of the education comes from sharing experiences with your classmates and from the personal network you gain.
What does an ambitious developer do? We go to conferences, take courses of a few days, read books. A lot of it is vendor-specific. There are web sites where we can hang out and discuss our profession, and of course you learn a lot from doing your job. But there is, to the best of my knowledge, no 2-year academic education for practising developers. This is even more remarkable when you consider how quickly our field is developing. The stuff you learned in college is outdated if you’ve been working for even 5 years.
There have been attempts; Richard Gabriel has proposed an MFA in Software Development inspired by his studies in poetry (http://www.dreamsongs.com/MFASoftware.html). It didn’t take off, however. I think the reason is that, unlike an MBA, it wasn’t ridiculously expensive. Price is often mistaken for value.

Sunday, March 26, 2006

Is XML the Whitworth thread of our time?
In the 19th century manufacturing was revolutionized when measuring systems and machine elements were standardized. The Whitworth screw thread was one of the most important. It allowed parts from different manufacturing batches, different factories and even different companies to interconnect.
XML promises to do something similar for data, and it's badly needed. Will XML succeed? In a way it already has. When choosing a file format for a new system today XML is the obvious choice. But data doesn't just have form (syntax), it also has meaning (semantics). If two systems are to exchange data they need to agree on both syntax and semantics. XML doesn't specify semantics, a much harder problem than syntax.
Should XML be the default choice for file formats? Probably, but give it some thought first. XML is quite verbose (bad for performance) and not very suitable for human editing without an XML editor. You may think that no one should edit your system's precious files, but in practice there is often some power user who can't resist the temptation to tweak the system by hand-editing files. You could try to make this impossible of course, but tweakability is a good thing and makes your customers come back. So it may actually be a good idea to have configuration files which are easily editable in Notepad. That rules out XML and suggests something like an old-fashioned ini file.
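As an illustration (the settings are invented), here is the same configuration expressed in XML and as an old-fashioned ini file; the latter is much friendlier to someone armed only with Notepad:

```xml
<configuration>
  <logging level="debug" directory="C:\logs" />
  <network port="8080" timeout="30" />
</configuration>
```

```ini
[logging]
level=debug
directory=C:\logs

[network]
port=8080
timeout=30
```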
If performance is very important you should consider binary formats, unless easy portability across big/little-endian processors is even more important.

Wednesday, February 22, 2006

Do you really need that RDBMS?
I think relational databases are used in more systems than they should be. In many systems the database is the very essence of the system, without a database it wouldn't make any sense. This isn't true for all systems, though. There are classes of systems which don't primarily exist to retrieve, process and store data in a database. Real-time systems which control hardware, games, communications software and graphics software are some examples. Often these systems still need persistent storage, for example to read configuration data and to write usage statistics. But does that storage have to be a relational database? The strengths of relational databases (flexible query language, security, ...) are not of much use in these applications but we still have to pay the costs: performance overhead, installation/administration issues and perhaps even financial costs.
So what to use instead? Files!
Text files for easy debugging, integration and portability or binary files for performance.
Should these files be XML files? Wait for the exciting answer in a future blog entry...

Monday, February 20, 2006

Security vs usability
If you go for a walk or a drive through central Stockholm with a WiFi device in your pocket, you'll discover many wireless networks called "NETGEAR", "linksys", "Apple Network" or other default names, and no security turned on. Not having to worry about passwords is convenient when you set up your home network; it makes the system more usable, but it's obviously not secure.
To an architect, security and usability are examples of quality attributes, non-functional aspects that need to be considered when designing a system. You usually can't have everything, trade-offs have to be made.
In the case of home networking, the architect seems to have decided that usability is more important than security. But who would this architect be?
The author(s) of IEEE 802.11b?
The architect of the access point?
The architect of the connecting device, usually from a different vendor than the access point?

None of the above have architected the actual system you are using, since the system of interest consists of both access point and device. The authors of the standard could have mandated a solution, but didn't as far as I know. So the result we're seeing is a kind of emergent behavior.
Emergent behavior is a powerful concept, but in most systems you want more control over the result.
So if you want to avoid surprising emergent behaviors in the systems you develop, make sure each system (and a system can consist of systems which consist of systems, etc) has an architect who makes conscious trade-offs between quality attributes based on collected requirements.

Thursday, February 16, 2006

More on Functors

Yesterday I wrote about Functors. The power of Functors comes not only from being able to store and call them (we can do that with function pointers) but from the fact that they are function objects instantiated from classes. Classes can use encapsulation and inheritance, which aren't available with function pointers. Functor class templates allow further generalization.
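A small sketch of what that buys you (the names are my own): unlike a bare function pointer, a functor can carry encapsulated state, and making it a class template generalizes it over types so it plugs straight into generic algorithms.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// A functor with state: true for elements greater than a threshold.
template <typename T>
class GreaterThan {
public:
    explicit GreaterThan(T threshold) : threshold_(threshold) {}
    bool operator()(const T& value) const { return value > threshold_; }
private:
    T threshold_;  // encapsulated state a plain function pointer cannot carry
};

int main() {
    std::vector<int> v = {1, 5, 8, 3, 9};
    // The functor is passed to a generic algorithm like any other value.
    auto n = std::count_if(v.begin(), v.end(), GreaterThan<int>(4));
    std::cout << n << "\n";  // 3
}
```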

In Lisp we can do even more interesting things with functions, in particular with Lisp's macro facility. A macro in Lisp is like a little program which writes a program, a very powerful concept. Lisp macros have very little in common with C/C++ macros.
Anonymous functions ("lambdas") and passing functions as arguments to other functions are common techniques in Lisp.

Once you are comfortable using these concepts, treating functions as data, you'll have a powerful tool available to make your designs more elegant and flexible.

Wednesday, February 15, 2006

Functions and citizenship
In traditional structured languages like Pascal, functions and data are distinct. This is often inconvenient, so in C we got function pointers and in C# we have delegates. These constructs go some way towards granting functions citizenship, in that we can treat functions as data in some limited ways.
In C++ there is another opportunity: Functors. A Functor is something that can be used as a function. Since C++ allows overloading operator(), this includes objects. Any class which overloads operator() is a Functor, since objects of that class can be called with function syntax.
Representing functions as objects opens up possibilities to store functions, pass them as arguments, etc. It also allows us to do something even more interesting: composing and sequencing Functors with generic algorithms where you pass in the concrete functions/Functors to be called.
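For example, here is a minimal sketch (the Adder and combine names are my own):

```cpp
#include <iostream>

// Any class that overloads operator() is a Functor.
struct Adder {
    int operator()(int a, int b) const { return a + b; }
};

// A tiny generic "algorithm" that accepts any function-like thing.
template <typename Op>
int combine(int x, int y, Op op) {
    return op(x, y);
}

int main() {
    Adder add;                                      // an object...
    std::cout << add(2, 3) << "\n";                 // ...called with function syntax: 5
    std::cout << combine(10, 20, add) << "\n";      // passed as an argument: 30
    std::cout << combine(10, 20, Adder()) << "\n";  // or created on the spot: 30
}
```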

Tuesday, February 14, 2006

Singleton considered harmful
If you've worked on an object-oriented system you've probably encountered singleton classes, classes which can have only one instance. Sometimes also known as "Highlander classes", as in "there can be only one".
Singletons are the gotos of data and should be avoided just like goto. The "convenience" of the standard singleton pattern, that you can access them from anywhere within your code, breaks locality just like goto does. Worse actually, since goto usually can jump only within a function.

One particularly insidious way to create a kind of singleton is available in C++: the static variable declared in a function. This kind of variable will be initialized only the first time the function is called and all instances of the class will share the same variable instance. This is nasty because it's hidden: there is no hint in the class declaration what's going on and even a casual inspection of the implementation may not reveal that subtle static.
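A minimal sketch of the pattern in question (the class is invented for illustration); note that nothing in the class declaration hints that all instances share the same counter:

```cpp
#include <iostream>

class Widget {
public:
    int nextId();  // looks like an ordinary member function
};

int Widget::nextId() {
    static int id = 0;  // initialized once, then shared by every Widget instance
    return ++id;
}

int main() {
    Widget a, b;
    std::cout << a.nextId() << " " << b.nextId() << "\n";  // prints "1 2", not "1 1"
}
```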

So how should you avoid unnecessary singletons? The problem you're trying to solve with a singleton is often to make some kind of global state accessible. Pass around references to the state object instead, preferably as an argument in the constructor. This also has the benefit of making it possible to supply an object of another class, conforming to the same interface; e.g. when you want to isolate a part of the system for testing you can pass a stub object (always returning the same answers) instead of the real live object which accesses a database or the internet or whatever.
Avoiding singletons is just one aspect of avoiding being concrete. Using interfaces and factories rather than concrete classes and instances makes your system flexible and testable, which in the end translates into profitability.
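A hedged sketch of this approach (the class names are my own): the state sits behind an interface and is passed into the constructor, so a test can supply a stub instead of the real thing.

```cpp
#include <iostream>
#include <string>

// Interface instead of a concrete singleton.
class Settings {
public:
    virtual ~Settings() = default;
    virtual std::string get(const std::string& key) const = 0;
};

// A stub for tests: always returns the same answer.
class StubSettings : public Settings {
public:
    std::string get(const std::string&) const override { return "stub-value"; }
};

class ReportGenerator {
public:
    explicit ReportGenerator(const Settings& settings) : settings_(settings) {}
    void run() const { std::cout << "using " << settings_.get("output.path") << "\n"; }
private:
    const Settings& settings_;  // passed in, not fetched from a global
};

int main() {
    StubSettings stub;
    ReportGenerator generator(stub);  // a real Settings could read a file or a database
    generator.run();
}
```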

Monday, February 13, 2006

The architect's tool chest
Yesterday I wrote about design decisions and cultures, and today I'll continue on a similar theme.
I've worked on at least four systems having large sets of parameters, i.e. values that depend on each other. One common reason to have a set of interdependent parameters may be that you want your system to be configurable to accommodate the needs of different customers; another may be that your system measures a number of values and computes other values from combinations of them. In each of these systems the solution to this problem was an object-oriented framework that holds the parameters and propagates value changes, triggering the computations of new values, and so on.
This is a perfectly reasonable solution. In each case these frameworks were redesigned before they felt right. Nothing wrong with that, learning from experience is fine. These systems are all fairly successful. So, what is there to write about?
Well, there is another approach to the parameter problem: Constraint Logic Programming. Never heard of it? Neither had the designers of the parameter frameworks I mentioned. CLP has never been very fashionable and it's not backed by any major vendor, but it's been around for almost 20 years.
After evaluating the pros and cons of an OO framework and CLP, you might still go with the OO solution, but if you haven't even heard of CLP your range of choices is obviously more limited. An architect who wants to deliver business value needs to have a large tool chest and not be limited to what's in fashion at the moment or what the biggest vendors are pushing.

For an introduction to CLP, you can visit this site: http://clip.dia.fi.upm.es/~vocal/public_info/seminar_notes/node6.html

Sunday, February 12, 2006

The accidental language designer

When I studied Computer Science at Uppsala University I was required to take a course in Program and Machine Semantics. It was a very theoretical course, and during a particularly difficult lecture our professor tried to motivate us by saying: "In your professional lives you will all design programming languages. To make them successful you'll need to understand the content of this course."
I thought he was nuts. Language design was the domain of geniuses and committees. I didn't feel like a genius and I had no desire to be on a standardization committee, so what did this have to do with me?
On my first job, I began to understand what he meant. The project was to build a test engine for a communication protocol. Tests were to be specified in a simple language that test engineers without a programming background could use, and guess what? We had to come up with the language. We threw together something that seemed reasonable; the lectures on formal semantics seemed far away. Our language turned out to be difficult to use: in situations we hadn't anticipated, no one knew what would happen. It didn't have any defined semantics. The system testers eventually learned what worked and what didn't, and used working code as templates for new code. For this problem I think a small language probably was a good idea, but we should have been more careful specifying it.

At another company, a colleague was given a very open-ended optimization problem. He was a great fan of Lisp (so am I, by the way), but company policy mandated C++ as the development language. My colleague despised C++ and object-orientation; he thought it was too "strict". He solved his dilemma by implementing his own "Lisp" in C++ (and calling it that is an insult to Lisp): everything was represented as arrays of arrays of chars, and all functions worked on arrays of arrays of chars. And, of course, you could also represent code the same way, and there was an eval function to run code.
Very flexible. Way too flexible, actually: my colleague was the only one who could understand the resulting system. Designing a language is not always the right idea.

Another example, from a third company: a system needs to handle workflows. The workflows are to be specified by domain experts. Now wiser from experience, I suggest that we need to define a language to express these workflows in and document what the constructs of the language mean. But the culture in this company makes this idea impossible. The idea that you can design your own language is just too far-fetched, somewhat like my own reaction many years ago. "The domain experts are supposed to do data entry, not programming!". With time, it turned out the data got more and more complex, and in the end became a sort of language anyway, albeit an odd and undefined one.
Defining a language from the beginning would have been a great idea in this case, but the organization wasn't ready for it.

So, Professor Barklund, wherever you are these days (Microsoft last I heard): you were right. It's no accident that I have come across language design issues in many of the systems I've worked on: software engineers design programming languages all the time and it's important to understand the semantics of programming languages.
Even more important is to understand when to design a language and when not to. That's harder to teach in a university course; that kind of judgement requires good taste. While an education can be a great help, experience in building systems is what really develops your taste. Design decisions aren't all technical either; the culture you work in is also important. A brilliant solution (in your own not so humble opinion) that your co-workers won't be comfortable with is not a good solution.

Hi and welcome to my blog!
I'm Jan Mattsson, a software architect and developer in Stockholm, Sweden. I also have some management responsibilities in the consultancy company in which I'm a partner, IT-Huset (www.it-huset.se).
My interests are broad, and I'll be writing on technical topics as well as organization and business aspects of software development.