Programming Languages are PC OSs circa 1986 September 20, 2008 | 10:17 am

Something has recently occurred to me as I was considering the proposition (put forward by the Python programmers and, I think, correct) that a programming language is also a User Interface. There is a fairly good analogy between the state of programming languages today, and the state of the operating systems for personal computers circa 1986 or 1988. Ignore mainframes and workstations for the moment, and just focus on the PC world.

The functional programming languages- Haskell, Ocaml, F#, Scala, etc.- are Unix. Very solid foundation, terrible user interface (anyone remember Motif? That’s what passed for a Unix UI back then, and much later). Widely talked about in the PC world, but (except for a few brave and crazy souls) effectively not used. Also, very popular with the academic crowd.

The OO scripting languages- Ruby, Python, Groovy, etc.- are the Macintosh. A beautiful user experience, no doubt- but underneath the hood there is no there there. Also, very popular with the beret-wearing crowd.

Java and C#, then, are Microsoft DOS, or maybe Windows 3.1- the structural underpinnings of the Macintosh, except not as good, with the user interface of Unix, except not as good. And with enough user share to dwarf both into utter insignificance, due to widespread business adoption. The choice of all who wear suits.

Flamewar: Activate!

Seriously, looking back at the Unix vs. Mac debates from that era (debates I participated in- on the Unix side, naturally), neither side really “lost”. What happened was not that we were forced to choose between an OS with good fundamentals and a lousy UI and one with lousy fundamentals and a good UI; it’s that the lousy-UI OSs developed good UIs (KDE, Gnome, Windows 95/98), and the lousy-fundamentals OSs developed good fundamentals (the Mac adopting Unix, Microsoft adopting NT).

Let me develop this analogy further, to explain what I mean. Functional programming languages really focus on, and really excel at, the “core” of programs- those places where it’s the program talking to itself, where typed data structures, algorithms, functions, modules, and internal (written explicitly for this program) code dominate. Dynamic languages really focus on, and really excel at, the “edge” of programs- where the program is interacting with the outside world- the user, other programs, the file system, etc. Here, strings, processes, files, UIs, shared libraries, etc. dominate.

It is wrong to say that either the “core” or the “edge” of a program is more important- different programs are more “core” or more “edge”, but the truth is that all programs are both. Just as it’s wrong to say that the UI or the kernel of an OS is more important- the lack of a good kernel led to the Mac’s inability to walk and chew gum (aka multitask) at the same time, while the lack of a good UI means that people are still loath to try Unix, even decades later.

Likewise, the lack of a good core means that doing large projects, or maintaining code, in languages like Ruby or Python, is very painful. On the other hand, doing any sort of process management or scripting in Ocaml is also painful. Note that getting the core right in a language is about as hard (although in a different way) as getting the edge right.

These different strengths show up in the different sorts of problems each group “prefers” to solve. Functional programmers tend toward the sorts of programs that have a lot of “core” but very little “edge”- the classic example here is compilers. Meanwhile, dynamic languages tend toward programs that are lots of “edge” but very little “core”- web sites, scripting, that sort of thing.

It’s not shocking, today, to believe that you can do both in an OS- have a good UI and a good kernel. Mac people no longer feel the need to justify the lack of virtual memory protection, for example, by claiming it is not needed. And Unix people no longer feel the need to defend command line interfaces by questioning the need for GUIs (or by claiming that Motif is “good enough, if you really need one”). There is a difference between justifying or defending an unfortunate feature, and promoting a good feature.

The difference is the common, known-bad ground- DOS in terms of OSs, Java/C# in terms of languages. Everyone used DOS, whether they wanted to or not- so the Unix people had experience with an OS that had a lousy kernel, and thus could compare having a good kernel (Unix) with going without (DOS). What they couldn’t do is compare the quality of UIs, as every OS they used had a lousy UI. Likewise, the Mac people could compare an OS with a good UI (Mac) against one with a bad UI (DOS), but all the OSs they used had lousy kernels, so they couldn’t really judge what a good kernel is worth.

If, as the OS analogy and common sense suggest, the answer is to do both well, more dialog is needed- but also clarity in the dialog. Beware of negative phrases like “you don’t really need” or “you don’t care about”. You might not need, or care about, something here and now, but that may just be because you’ve never encountered the other side. Instead, focus on the positive phrases: “you really need”, “you care about”. At some point in the near future I need to finish that post I’ve been working on about the things Ocaml did right, the things you do care about and do need (short list for the curious: strong static typing, purely functional data structures, modules and functors, great GC, native mode compilation). But at the moment I’m more interested in the other side- what are the things that Ruby, Python, Groovy, etc. do right?

The floor is open.

  • http://weblog.raganwald.com Reginald Braithwaite

    I think your statement about Macintosh is true of MacOS circa 1986. Today you have the Macintosh UI combined with the Unix kernel in OS X.

    What (is the/will be the) programming language equivalent, the language with the UI and the kernel?

  • http://web.gmlk.net gmlk

    On which side would you put Common Lisp or Smalltalk?

  • JS

    The secret to the success of a scripting language like Ruby or Python is fairly straightforward to explain. Firstly, the language cores are simple and have a number of useful features that work well together. Secondly, object-oriented programming is supported by default, unlike in older scripting technologies such as shell scripts or Perl. Thirdly, these languages have integrated a large number of other libraries in a way that is consistent with their language cores.

    This also has had the effect of ‘harmonizing and standardizing’ how those libraries are used, to an extent. The magic that makes this last part possible is SWIG: http://www.swig.org/ Modern scripting languages get to benefit from providing a thin, consistent wrapper around thousands of man-years’ worth of work done by C and C++ programmers. They also benefit from the mistakes of several generations of older scripting technologies: first the UNIX toolset, then the UNIX shell scripting languages, and later the Perl programming language. It’s not so much that these older tools are bad; it’s simply that the modern scripting languages have benefited from nearly 50 years’ worth of experience with command line interfaces and programming language design.

  • Fred Blasdel

    You started off bad (up to “flamewar activate!”), but turned it around quickly and ended up with a surprisingly good argument (you even managed to avoid using a car analogy!)

    I think Haskell actually has a really good interface, they just made a huge mistake in using the keyword Monad — it led directly to imposing psychological framing, misleading definitions, and misguided tutorials.

    I also wouldn’t put C# in the same dunce chair as Java (maybe just the back of the class) — it doesn’t have the same extreme conservatism, and it has a surprising amount of strong functional stuff in it. It’s what Java could have been had Sun not fucked themselves so badly in the late nineties, and a much better language than the MS sharecroppers and ‘enterprise’ folks normally get to use.

    For the dynamic languages you left out Objective-C, which is arguably one of the larger current success stories for dynamism. There’s Smalltalk too, but I don’t think that it will ever make a comeback, despite Seaside. And Erlang, not that it really makes sense for the little guy.

    The unifying thing I see that the dynamic languages have done right is pervasive message-passing semantics — it’s the one underlying thing you gain by doing all those runtime type-checks. They do have a strong type system of sorts, it’s just completely different — “does this object have a handler for this message?”

  • Steve

    “Mac people no longer feel the need to justify the lack of virtual memory protection, for example, so they no longer claim this is not needed.”

    Er, what? Did we ever do this? This has been the #1 complaint on the tongue of Mac users for as long as I can remember using a Mac (1990 or so).

    Is this article about slashdot stereotypes, or actual people in the real world?

    Also, you need to define terms like “dynamic” and “functional” more precisely. What is Lisp? I would say it’s more “dynamic”, yet throughout its history it has excelled at what you call the “core”. So you’ve either got some vocabulary ambiguity, or a really big counterexample that needs explaining.

  • Quagmire of family guy fame

    what those languages did right is that they aren’t statically typed :) static typing forces a homogeneity at every structural border, so it’s not just static typing. it’s static programming. from limitations like that, you’re forced to use generics and polymorphism, and otherwise go out of your way (often structurally) where with dynamic/duck typing it isn’t a problem

    likewise, languages that can’t iterate simulate mutation through function call boundaries. languages that are referentially transparent simulate side effects the same way. those mechanisms have their place, but when you just need simple things, those abstractions are onerous in terms of writing, reading, and understanding the code

    it’s like the languages are designed so that it’s easy for the computer to run them and to enforce “correctness” along a dimension that has nothing to do with your program *working*… and then the designers say “oh wait, i guess we do need some dynamism here…” and they complicate the shit out of their compilers by basically greenspunning dynamism into the language

    people nod their heads when they hear “keep it simple, stupid” but they don’t know what simple means

  • Fred Blasdel

    JS: I would not describe Ruby as having a simple language core — I think Matz tried to dodge way too many hard problems by doing things like creating yet another function type. Guido made a lot more pragmatic decisions, and it shows in Python being a lot simpler.

    I’m also not really fond of SWIG, I prefer approaches like Ctypes in Python, or the nascent MacRuby. There’s also hosting yourself on the same VM where the libraries are: done a lot in Java/.Net so far, but is possible for native libraries too. Microsoft has been doing it on .Net by making clever, non-sucky native interfaces, and Apple is heading in that direction with a lot of their LLVM work.

  • Fred Blasdel

    Quagmire, it doesn’t seem you understand at all what static typing means. “What To Know Before Debating Type Systems” might be a good start for you.

    Have you used any of the languages [Haskell, Ocaml, F#, Scala] referred to in the post? None of them have any of the problems you ascribe to them.

  • Quagmire of family guy fame

    that’s a good link

    i’ve used F# (close to OCaml) and played with Scala. haven’t touched Haskell, but are you saying it can do iteration without recursion or side effects without monads?

    take for example the result of a pattern matching expression… in F#, all branches of the expression have to return the same type. this constantly gets in my way, and i have to ramify the expression or refactor a ton of crap. this is especially a hindrance when the pattern match is the final expression of a function. a quick fix is its so-called “union types”, but to use those just to homogenize the return expression is a very good example of greenspunning dynamism
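
    (As a minimal sketch of the situation, in OCaml syntax since it is close to F#- the type and names here are hypothetical. Every branch of the match has to return the same type, so a variant- a “union type” in F# terms- gets declared purely to homogenize the branches:)

      (* Without the wrapper type, returning "zero" from one branch and 1
         from another would be rejected by the type checker. *)
      type answer = Name of string | Count of int

      let describe n =
        match n with
        | 0 -> Name "zero"
        | 1 -> Count 1      (* the int case, wrapped so all branches agree *)
        | _ -> Name "many"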

    hash tables and other structured data in both F# and Scala are retarded to use. you can’t put whatever type of value you want in any spot, so you have to use union types or other aforementioned tomfoolery. and then there’s LIST<LIST<TUPLE2>>. real nice. i wonder if we could shoehorn emacs’ paren-management powers into angle bracket-management powers. luckily F# is fine here because it rarely needs type tags

    one thing i’m curious about is what we’ll end up with after the Javascript optimization wars are settled giggidy giggidy

  • http://transfixedbutnotdead.com draegtun

    @JS said “… object-oriented programming is supported by default unlike older scripting technologies such as shell scripts or Perl”

    Perl does have OOP by default.

    Also have a look at Moose and you will see that Perl has one of the most advanced OOP frameworks going.

    /I3az/

  • http://transfixedbutnotdead.com draegtun

    moose.perl.org

  • http://tristanisacodemonkey.blogspot.com Tristan Juricek

    So, would languages like F# and Scala – functional languages sitting on top of a wide-use base – be more akin to an OS X – great GUI with tons of Unix tools?

    Actually, the reason Scala piques my interest is similar to the reason I switched to a Mac for my development environment. It’s a big bag of great tools. In my case, JVM + tons of libraries + sweet functional language = happiness. The “tons of libraries” part is actually more important to me.

    Why did I use Windows in the ’90s? Well, growing up, it had more games. It was the software I used. Now I’ve moved from gaming to predominantly programming, drawing, etc. OS X fills a nice sweet spot, because I have both great creative tools – Painter, Photoshop – plus all the wonderful open source command line utilities.

    I find that language comparisons are usually not as important as “ecosystem” comparisons. It’s like debating the wheels of a car – yes, it’s an important part, especially for gearhead racers. But it’s still only a part.

    The winning ecosystem is the one that opts for being big and inclusive. Apple made gains as the options of what you could use with the system grew. The JVM and CLR environments will probably stay strong, because they’re including more and more. All I have to do to deploy new logic written in Scala is compile it and stick the code in the same JAR, with all the other things I’ve done in the past.

    I’d be curious how many other languages are aimed at including a big ecosystem, rather than just finding a new way to communicate.

  • Brian

    Actually, I think Quagmire has a point, although I don’t think he expresses it well. So this is what I think Quagmire’s point is-

    it’s about error handling. What does the language do in the face of an error?

    In Ocaml, Haskell, etc., there are basically two choices: either a) make it impossible for the error to occur, or b) explicitly handle the error. In Perl, Python, Ruby, etc., error handling is a lot more laid back.

    Of course, this gets back to my point about the “core” of the program vs. the “edge” of the program (by the way, the terms I chose were not meant to indicate that one or the other was more important, or better, or whatever, than the other- just that they were different). The number of different error conditions in the core of the program is small, and they generally indicate some (more or less serious) programming error. So being anal about error handling there is probably a good idea. On the other hand, at the edge of the program, the number of potential error conditions explodes.

    Consider creating a socket for someone else to connect to us. First we have to create the socket, which has (at least) 8 different potential failures. Then we have to bind the socket, which has 15 different potential failures. Then we have to listen on the socket- 4 more potential failures. Then we have to accept the connection- another 15 different potential failures. That’s 42 different error conditions we have to handle just to open a dang socket- in the simplest possible way- no calls to fcntl or ioctl to set special socket options, and this isn’t counting reading from (9 potential errors, not counting no data available), writing to (17 different potential errors), or closing (3 different potential errors) that socket.

    There are times and places where I want to be careful, and consider all the different possible errors and either decide explicitly how I want to deal with them, or explicitly prove they can’t happen. But most of the time I don’t care- the vast majority of those errors I was mentioning almost never happen. For example, one of the error conditions that bind can hit is that you pass in something that isn’t a socket (ENOTSOCK). What’s the probability that this error condition will actually be hit? Effectively 0- especially considering the code that created the socket is one line up in the source file. Forcing the programmer to explicitly handle all these error conditions when he doesn’t need to is just needlessly painful.

    Often, even most of the time, you just don’t care. Just give me a socket, please. And spawn this process, and perform some ad-hoc parsing of this input, and let me get on with the job.
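
    To make the contrast concrete, here is a minimal sketch in OCaml (using the standard Unix module; the port handling and names are mine, not a prescription). The “just give me a socket” style simply lets the exception fly; the careful style pattern-matches the specific errors it actually cares about:

      (* The laid-back version: any of those dozens of potential failures
         raises Unix.Unix_error, which we simply let propagate. *)
      let listen_on port =
        let sock = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
        Unix.bind sock (Unix.ADDR_INET (Unix.inet_addr_any, port));
        Unix.listen sock 10;
        sock

      (* The careful version: decide explicitly what the errors we care
         about mean here, and fail loudly on all the rest. *)
      let listen_on_carefully port =
        try Some (listen_on port) with
        | Unix.Unix_error (Unix.EADDRINUSE, _, _) -> None  (* port taken *)
        | Unix.Unix_error (e, fn, _) ->
            failwith (fn ^ ": " ^ Unix.error_message e)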

  • Fred Blasdel

    At least for Haskell, the error handling is not like Java (with 42 anally checked exceptions). For high-level cases (is this socket good) things are usually simply encapsulated by Maybe or Either when you use them — there are 8 ways to report errors, but they’re mostly isomorphic to one another, and none require you to handle 42 cases separately.

    What I find nice is that the compiler can warn you about a missing case (if you turn that stuff on), but it doesn’t force you to handle it.

  • Brian

    Actually, error handling is one of my major complaints with Haskell. If you’re working in any other monad (highly likely in Haskell), adding the Maybe monad into the mix means you now have two levels of monads- and welcome to the wonderful world of monad transformers. Or you could just explicitly handle the Maybe values all over the place, cluttering up the code yet further. Or you could use exceptions- except that catching exceptions requires you to be in the IO monad.

  • Greg M

    Not a bad analogy. It’s the difference between making it easy to do things right and making it easy to do things without understanding what you’re doing.

    For a reasonably intelligent person, it pays off to go the doing-it-right route once you use an operating system for more than say 100 hours in total, or once you write code that you’re going to use again tomorrow, or that someone else has to read.

    Quagmire, you probably won’t believe this yet, but 99% of the time a single return type restricts you, it’s an indication that you’ve either made a mistake designing your program or you’ve overlooked a F# feature that lets you express what you’re trying to do in a more natural way.

  • http://enfranchisedmind.com/blog/ Robert Fischer

    Groovy, which I’ve given up OCaml evangelism to go evangelize, is a really solid language that gets a few things really “right”.

    1) Optional typing. Although Groovy’s flagship product is Grails, the language works surprisingly well as a drop-in replacement for Java in the “core” parts, because you’ve got static typing if you want it. Meanwhile, at the “edge” parts (like Grails), you can play fast-and-loose with typing. Note that optional typing also gives you the ability to write nice method overloading, so you don’t have to hand-write the dispatch based on poking and prodding the metaobject of the argument passed in.

    2) Exceedingly clean interface with existing libraries and technologies. Since a Groovy class is-a Java class (not just Java byte code), and since it’s easy to map Groovy’s methods/properties/operators to Java’s methods, it’s exceedingly easy to bounce back and forth between Java and Groovy. For instance, I wrote my GroovyCacheMap for JConch in Java, so I didn’t need a dependency on Groovy — yet it’s built on Groovy’s closure feature.

    There’s always awkwardness and pain at the point of integration between two very different languages — SWIG and JRuby just reduce the pain. In Groovy, the pain is zero, which means that it is exceedingly easy to leverage Java’s technologies.

    3) Meta-object magic that’s sane (by meta-object magic standards, anyway). Unlike Ruby, which allows a class to be opened up and reworked at whim, Groovy structures the meta-object magic via its ExpandoMetaClass and Categories. This enables things like a Groovy Just-in-Time compiler to get real footholds into optimizing the code.

    4) Extreme succinctness in point-free programming. It’s pretty friggin’ awesome to write code like:

      foos.findAll { it.isGood() }.inject([:]) { memo, foo ->
        memo[foo.bar] = foo.baz
        memo 
      }.collect { it.key % 2 ? it.value : it.value+1 }.each { ... }

    While this is one point where I really start to long for implied static typing, the succinctness is even better than in OCaml. It’s extremely readable from a human perspective (“In foos, find all where it is good, then fold into a map of bars onto bazzes, then map to even key values (adding one if necessary), then…”).

    5) Hashes as more-or-less drop-in replacements for objects. Since “foo.bar” on a map becomes “foo[bar]” and “foo.baz(1)” becomes “foo[baz](1)”, you can use hashes to represent small objects. If you want an object with some behavior, you can hand-roll a little pseudoclass to represent the object by building a map up. The stunt is basically the same one as using a record with function references in F#/OCaml, but this power works really well with duck typing, because you can sneak in just the functionality you need.
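
    (For comparison, a minimal sketch of that record-of-function-references stunt in OCaml- the names here are hypothetical:)

      (* A record of closures standing in for a small object; the mutable
         state lives in the closures' shared environment. *)
      type counter = { bump : unit -> unit; value : unit -> int }

      let make_counter () =
        let n = ref 0 in
        { bump = (fun () -> incr n); value = (fun () -> !n) }

    The difference duck typing makes is that the record’s shape is fixed by its type declaration, while the Groovy map can grow whatever keys you care to sneak in.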

  • Quagmire of family guy fame

    Brian: that’s a good example. fundamentally i think it’s the fact that restriction in general goes against the nature of programming and of the environments that programs live in. skyscrapers are built with flexibility because nature is quick to teach lessons about the lack of dynamism… unfortunately, the lessons nature teaches in the programming world aren’t as vivid, so it’s easy to never learn

  • http://www.linkedin.com/in/robertfischer Robert Fischer

    Quagmire –

    Your e-mail in your post bounced back to me, so I’m posting this here.

    After considering your other (moderated) post carefully, I’ve decided to bounce it because you used foul language and asserted a lot of things without backing them up — one of those I’d let slide, but both of them get moderated. If you have some more concrete examples instead of abstract assertions, that’d be good. And not swearing will keep you out of my moderation queue.

  • http://hamletdarcy.blogspot.com Hamlet D’Arcy

    Part of the language User Interface is error handling. Many of these dynamic languages on VMs suffer from what I call “implementation bleedthrough”. In my experience, one of the biggest learning curves of Groovy is learning to make heads or tails of the stack traces. The JVM and Reflection details bleed through the dynamic language, so the programmer needs to remain consciously aware of the underlying platform. Another implementation bleedthrough in Groovy is the weird argument reordering bug Robert found a while back… when it’s explained how the language is implemented in Java, you sort of nod and say, OK, I understand how it works… but observe it as a Groovy programmer with no knowledge of how the language or the JVM works, and you will claim all day that it is a bug.

    I feel that F# has very little implementation bleedthrough. I know nothing about the CLR and have never needed to know about it. Errors and exceptions don’t seem to have the same level of noise that Groovy’s do. I also don’t need to know about the CLR types; they have all been wrapped in F# types so that there is a single consistent style. Can you imagine programming Groovy without intimate knowledge of the JDK types? OK, so Grails would still be great, but as a scripting language or general language, you’d be hobbled without knowing the libraries from memory. Plus there is an impedance mismatch when using Java libraries from Groovy: there are simply a lot of edge cases where dynamic tricks don’t work once you execute Java-compiled code, and they are in a different style. Having programmed in F#’s wrapper types, I can say that the programming environment is much better with wrapper types than with just SDK integration to another language’s types. (Note, Groovy has a great migration path from Java to Groovy which is only possible with this level of integration. F#’s migration path doesn’t look so good.)

    And as for the comment, “Firstly, the language cores are simple and have a number of useful features that work well together.” Egads. Functional simplicity, with a small number of primitives building larger abstractions, is what many functional languages are all about. One of the best teaching books on Scheme has two prerequisites: the reader must be able to read English and perform basic addition and subtraction. And that’s for a book that covers the Scheme language, recursion, and even the halting problem. A small number of primitives with the rest built on top is the way of simplicity. It’s all about reference points, but from where I sit Ruby does not have a small, simple core. Scheme has a small, simple core. F# is somewhere between the two.

    And while you may not like Microsoft, and you may not like Visual Studio… they are trying to address the development environment issue. They are not publishing a language and inviting the community to add support via plugins… they have a team of people working on making the F# IDE better than OCaml and Vi. Good for them.

  • Scott Vokes

    Just a note, I think there’s a strong overlap between what you’re calling “edge” languages and what other people call “glue” languages.

  • Quagmire of family guy fame

    Robert: that’s fine, and thanks. and yea i never put my real email on these things. giggidy giggidy

    Hamlet: part of that is probably because F# is static. it shouldn’t have too much trouble intermingling with the CLR. the IDE “issue” is commonly pointed at dynamic languages because the IDE can’t know for sure what type some variables might be. but then if it’s a dynamic language it probably isn’t typeful

    also, Microsoft had been looking for a higher-level static language for a while. some years ago it was willing to support Lisp… with the caveat that the language become strongly/statically typed. you can imagine the number of abstract middle fingers that went up in Lispers’ heads

    of course now that dynamic languages are picking up momentum, Microsoft starts courting them

  • http://www.linkedin.com/in/robertfischer Robert Fischer

    Quagmire:

    It’s true that static languages do some backflips to support the dynamic aspects of languages. One of the nice things about functional languages is that they tend to operate at a higher level- you say things like “apply this function to each value in that map”, so the actual non-data-structure types become irrelevant. But you do end up with union types and the like, because the user needs to communicate more information to the compiler in a static language than in a dynamic one.
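
    (A sketch of how little the types intrude there, in OCaml- the helper name is hypothetical. “Apply f to each value in that association list” never mentions the element types at all:)

      (* Inferred type: ('a -> 'b) -> ('c * 'a) list -> ('c * 'b) list *)
      let map_values f m = List.map (fun (k, v) -> (k, f v)) m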

    On the other hand, dynamic languages regularly start to accrue a layer of static typing code. Here’s a classic example:

    def foo(bar) 
      raise "bar cannot be nil!" if bar.nil?
      # ...
    end

    Or, particularly in Ruby, you get lots of code like this:

    def foo(bar)
      bar_s = bar.to_s  # Make sure we have a string
      # ...
    end

    This leads to all kinds of duck typing style APIs being written — “Argument must respond to ‘.to_a’ with an array” and the like.

    (For more on null/nil as type info, see 7 Actually Useful Things You Didn’t Know Static Typing Could Do: An Introduction for the Dynamic Language Enthusiast.)

    So both dynamic and static programming languages end up having pain when they try to play in the other person’s paradigm. This is one of the reasons I was so impressed with OCaml’s type system — the implied static typing seemed to recapture a lot of the dynamism without sacrificing the safety of static typing. In fact, its static typing provides substantially more guarantees than the type systems I was used to (C#, Java), which was refreshing.
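
    (A one-line sketch of that recaptured dynamism: OCaml’s object types are structural, so a function can demand exactly “responds to .to_a” and nothing more, checked at compile time:)

      let as_array x = x#to_a
      (* Inferred type: < to_a : 'a; .. > -> 'a
         i.e. “any object with a to_a method”- the duck-typed contract
         from above, but statically checked. *)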

    As for MSFT — Microsoft will go where the market is. Their corporate MO is to wait until there is a successful piece of not-terribly-sucky software out there, and then throw their weight behind a rather sucky knock-off. In the programming language arena, they seem to be doing a pretty good job with their knock-offs — C# is a better language than Java, and F# is at least competitive with OCaml.

  • bhurt-aw

    Sigh. I tried. I really tried.

    There is a class of people- not many, but I’ve encountered enough of them to recognize the class- that views (static) type systems as the main, dang near the only, thing: programming languages are type systems, with attached syntaxes and evaluation strategies either tacked on as an afterthought or just left as an exercise for the student. These people are unable to see past the type system to any other aspect of the language.

    This thread has caused me to recognize that there is a mirror-image class of people who are equally focused on the type system and equally incapable of seeing beyond it- except that instead of being static typers, they’re dynamic typers. Note that in both cases, it’s all about the type system, and discussion beyond the type system is impossible. Reg, when you wonder why every comparison of functional vs. other programming languages leads to a (nonproductive) discussion of static typing, remember this.

    Personally, I think there is a heck of a lot more to a successful language than just the type system. Libraries, and the communities that create and maintain them, for example. And what role does “syntactic sugar” play in the mix? One of the things I miss from Perl is the ability to easily spawn off simple processes with backticks- `echo foo` or whatever. This isn’t a static-vs.-dynamic thing, either. On one side you have the Lisp and Scheme hackers, who claim that syntactic sugar gives you cancer of the semicolon, and on the other you have Perl and Ruby and company, piling on the syntactic sugar. And note that both sides of this debate are dynamically typed. Which way is right? Or is the right way some middle ground, in which case the question becomes: which bits of syntactic sugar have the most bang for the buck?

    Or, if you can’t get past it, I suppose we can discuss type systems.

  • bhurt-aw

    Scott Vokes: A glue program is just a program with two (or more) edges. Obviously, if the purpose of a program is to deal with two different edges, edge-style programming is going to be of prime importance. Note that you can do glue programming in just about any language- it’s just way less painful in some languages than in others. A glue language, then, is one in which it is not significantly painful to write glue programs.

    There’s more to being a glue language than just good string and process handling, though, and this is a good point. For example, good glue languages have (relatively) simple run time systems, which makes them easy to embed in other programs- Perl, Python, Ruby, Lua, TCL all are easy to embed. Java and Ocaml, not so much. The Foreign Function Interface (or Native Code Interface, or whatever the language calls the way you call code not written in that language) is also important.

    On the other hand, what the “native” environment is, is also becoming fuzzy- we’re starting to get a lot of glue-oriented languages that use an existing (complex) run-time: Groovy and Jython and JRuby on the JVM, VB on .NET, and so on.

    But this also runs directly into Greenspun’s Tenth Rule: any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

  • Scott Vokes

    bhurt-aw: “A glue program is just a program with two (or more) edges.” Yes, very good point.

    Also, about your syntactic sugar question — I’ve been wondering about this a great deal. Too much SS makes the language noisy and ugly, but a little bit of well-planned SS makes it much, much more straightforward to use. Python’s slicing operator (e.g. someStr[1:-1] -> drop first and last chars) and several things in Lua (e.g. obj:method(arg) -> obj.method(obj, arg), table.key -> table["key"]) are good examples of things that don’t just make the language easier to use, but fundamentally change the character of the language: certain operations become so quick to use that you find new idioms with them that simplify the overall language.

    I’ve been working on a toy Lisp-like interpreter in my spare time, specifically to experiment with SS. I think slightly more powerful read macros would make the language quite a bit less off-putting to curious people. An “infixify” RM, for one; something I learned from scheme, ocaml, and forth is that if a language messes with the syntax for basic arithmetic (e.g. ” +. “), it will probably give a terrible first impression, and some people will just drop the language immediately. It may be worth providing some SS for arithmetic, even if it’s irrelevant to the semantics of the greater language.
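
    (Concretely, the OCaml arithmetic in question, for anyone who hasn’t seen it:)

      let n = 1 + 2        (* integer addition *)
      let x = 1.0 +. 2.0   (* float addition: note the dot *)
      (* 1 + 2.0 is rejected by the type checker; there is no implicit
         int/float mixing, which is exactly what scares newcomers off. *)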

  • http://enfranchisedmind.com/blog Robert Fischer

    @Scott Vokes

    Excellent point on SS. Here’s a very verbose +1:

    The interchangeable map access/property access aspect of Groovy (indeed, the entire mostly-orthogonal properties/methods approach) has drastically extended the capabilities for mocking and dynamic typing in that language, even compared to other dynamic languages like Ruby. If you’re testing a method that looks like:

       void foo(bar) {
         bar.baz()
         bar.val++
       }

    You can test “bar” with:

    def bar = [val:0]                 // a map standing in for the object
    def bazCalled = false
    bar.baz = { bazCalled = true }    // stub baz() with a closure
    obj.foo(bar)
    assertEquals(bar.val, 1)
    assertEquals(bazCalled, true)

    That’s simply not a possible approach in languages without that shortcut.

    Similarly, I first started getting into functional programming (without really knowing what it was) in Perl, because subroutine references and “method templating” were so easy to do.

    my $x = sub { ... };

    Because of that, entirely new ways of meta-programming structures seemed natural, including having objects that were basically just thinly-veiled subroutine references (see my Regexp::Tr library), and then data structures of thinly-veiled subroutine references (see my Text::Shift library).

  • http://enfranchisedmind.com/blog Robert Fischer

    @Scott Vokes

    The syntactic sugar that Java did — basically providing some low-level operator overloading for integer and floating point arithmetic — seemed to be a nice compromise between general-case operator overloading (evil[1]) and universal operational purity (off-putting). Surprisingly, I’ve heard very few people lamenting the inability to define + for matrices in Java. The one downside is that there comes a point in every journeyman Java developer’s life where they get confused about why 1 / 7.0 * 7 > 1 — the reality is that floating point and integer arithmetic are fundamentally different, so mixing the two is a Devil’s bargain: you don’t put off beginners used to blurring those boundaries, but you do put off intermediate developers who suddenly discover that there’s a lot of dangerous play. Of course, by the time they’re intermediate developers, you’ve already suckered them into knowing your language pretty well, so they’re vested.

    [1] The biggest evil of general-case operator overloading is that changing foo + bar to bar + foo suddenly drastically changes the definition of “+“. It gets even worse when you consider foo * (bar + baz) vs. foo * bar + foo * baz. Or, y’know, 1 / 7.0 * 7 vs. 7 * 1 / 7.0.

  • Marc Stock

    I used to do C++ programming many years ago. Back then, operator overloading was generally frowned upon because the C++ world had gone through an era where it was heavily overused (new toy!), and even in cases where it was appropriate, the effects of the overloading were often not what was expected. Method names that clearly indicated the function to be performed became far more preferred. The ‘+’ symbol can mean different things to different people. Add to this the fact that (at least at that time — I don’t know if it’s changed) IDEs didn’t have javadoc-like support for operators, and it just wasn’t pretty. Operator overloading is great when it’s very clear what the operator should be doing given the context, but outside of basic math, those opportunities are rare. That said, using operator overloading can be thought of as a sort of DSL Lite- and if the developer community cannot handle that, can it really handle full-blown DSLs?

  • http://www.linkedin.com/in/robertfischer Robert Fischer

    The problem with operator overloading is mixing and matching contexts — what “+” means in one place vs. another is context-driven, and so mismatched contexts can cause surprising results. DSLs exist in their own driving context, which should eliminate the surprises.

  • Marc Stock

    “DSLs exist in their own driving context, which should eliminate the surprises.”

    To a point, but it doesn’t fix the problem because when you’re applying an operator to something, you should know the context as well. I don’t think context is the big issue here. It’s interpretation of how said operator should function given that context. Depending on how you design a DSL, it could be subject to many questions of this nature (which is why a lot of people are saying you shouldn’t make a DSL very natural language-like).

  • Hallo,

    I’m not an expert on so many languages, so I don’t want to spread the little stuff I know.

    One thing I do want to mention is error handling. I’m learning Dylan at the moment and I really like the language. You guys should really take a look at the error handling system in Dylan. I have never seen it done in a cooler way, but I only know a couple of other languages.

    Dylan was designed for big applications, so they wanted to get error handling right. What do you think?
    Dylan Conditions or here