bhurt-aw

And now slapstick is endangered… September 12, 2010 | 01:57 pm

Irony died shortly after birth. Satire was critically wounded by a combination of Poe’s Law and The Onion becoming prophetic. But now, we’re starting to lose straight-up slapstick:

Dresden, Germany, resident Petra Kujau, 51, was fined $380,000 this week and given a two-year suspended sentence for selling 300 copies of masterpieces by artists like Vincent van Gogh, Franz Marc and Claude Monet. However, she never claimed that her paintings were originals by these great masters. Her crime was to wrongfully market them as copies made by her (supposed) great-uncle: counterfeiter extraordinaire Konrad Kujau, the man behind the Hitler Diaries, arguably the greatest fake of the modern age.

Insert your snarky quip here, ’cause I’ve got nothing…

And Apple is removed from my sh*t list… September 9, 2010 | 09:25 am

Just wanted to post this link, wherein Apple rescinds its restrictions on what development tools app developers can use. I hereby withdraw my call to boycott Apple.

So soon we forget August 6, 2010 | 04:44 pm

I need to respond to PZ Myers’s blog post on this, the anniversary of the Hiroshima bombing.

The first thing to remember is that the main difference between what we did to Hiroshima and Nagasaki and what we did to Tokyo, or what Japan did to China (see “The Rape of Nanking”) or the Philippines, or what we did to Dresden, or what Germany did to Coventry or Leningrad or Moscow, or what Russia did to Berlin, etc., etc. was how cheap it was. One plane. One bomb. Prior to that time it took thousands of men to utterly destroy a city and kill all or most of its inhabitants. Dresden, for example, took 1,300 heavy bombers, many of which didn’t make it back. But despite the cost, Tokyo and Dresden and Nanking and Coventry and Leningrad and all the rest happened anyways. It was a time of horrors.

And maybe Japan would have seen reason, had we nuked a mountaintop instead, say, or an atoll (as PZ Myers suggested). Or, perhaps not. We can look back now, from our comfortable seats here in the 21st century, and say with perfect hindsight that Japan should have immediately realized the consequences and implications of nuclear weapons. My experience is that people don’t learn that fast. A miscalculation about the Japanese response to a demonstration means either we end up vaporizing more cities anyway, or we invade Japan- which would mean millions of American dead, and who knows how many Japanese dead.

Yes, there is blood on America’s hands. Hiroshima and Nagasaki were great crimes. But this is the first thing we forget- that is what war is. A great crime. Innocents get killed, generally in even greater numbers than the guilty. Blood is spilled. It is not noble. It is not heroic. The only time it is remotely justifiable is when the greater crime is to not fight the war. At its best, at its most righteous, most clean-cut, most honorable, it is the decision whether or not to kill hundreds of thousands of people to avoid the possibility of having to kill millions. There is no good here, only varying degrees of bad and worse.

Part of the problem is scale- one person dead is a tragedy, a million dead is a statistic. So narrow the focus down, from the statistic back to the personal. This is what it means to go to war: There is an eight-year-old girl. Picture her. Big eyes, dark hair, she loves her doll, her comfy blanket, and sweets. And oh yeah, Mommy and Daddy. She thinks boys are icky, and isn’t sure about school (she’s somewhat shy). War means she’s going to have her guts blown out. She’s going to die in excruciating agony, her dying gurgle choked off by blood. And her father is going to get to hold the bloody hamburger that was his daughter, in the burnt-out ruins that were their home.

That is war. Fix that image in your mind. That girl, she lived in Hiroshima. And Tokyo, and Dresden, and London, and Berlin, and Leningrad, and Moscow. And Korea and Vietnam and Afghanistan and Iraq. In every war humans have ever fought, back to the dawn of time, little innocent eight-year-old girls died. And boys, and women, and old people, and innocents of every stripe. This is the cost of war, and its true nature.

So, on this date, the anniversary of an act of war significant only because of a technological advance, I hear the war drums starting again. When you tell me “we must go to war”, this is what you are advocating- you are saying “we must kill that innocent eight-year-old girl.” This time it is Iran; last time it was Iraq. The only thing that changes is which eight-year-old is killed. I must ask- is the goal we seek worth the cost? Is there no way to achieve that goal other than ending an eight-year-old girl’s life in a paroxysm of blood and pain? Sometimes the answer is yes (I think Hiroshima was one of those times). But most of the time, almost all of the time, I find the answer is a definitive no.

“Never again”- I like the sound of that. But “never again” will only become a possibility once we start to remember the true nature of war. By singling out Hiroshima as something exceptional and noteworthy, as opposed to normal for war, Mr. Myers is (unintentionally, I’m sure) encouraging the very amnesia which allows the crime to be repeated. All we have to do, according to this logic, is not nuke Tehran, and going to war isn’t that big of a crime. No, Mr. Myers, it is. That eight-year-old girl, and her father holding her bloody remains, do not care whether the bomb was nuclear or conventional. The technology does not matter.

So, by all means let us take this day, August 6th, to remember the true cost of war. And also February 13th (Dresden), and December 13th (Nanking), and…

Steve Yegge is an idiot July 28, 2010 | 06:42 pm

And to think, I used to have respect for the man. Then he goes and posts this pile of fetid dingo kidneys.

I’m going to explain in detail why ditching private (and, by extension, public) is bad. Obviously this needs to be spelled out, because a lot of programmers- including Steve Yegge- don’t get it.
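The short version, as a hypothetical sketch in Python (the Account class and its invariant are an invented illustration, not from Yegge’s post, and Python’s leading underscore is only a convention standing in for a real private keyword): private state is what lets a class enforce an invariant in one place.

```python
class Account:
    """Invariant: the balance never goes negative.

    _balance is "private" by convention. If all mutation goes through
    deposit() and withdraw(), the invariant is checked in exactly two
    places. Ditch the public/private boundary, and any line anywhere in
    the program can set the balance to -100, so verifying the invariant
    means auditing the entire codebase instead of one class.
    """

    def __init__(self, balance=0):
        if balance < 0:
            raise ValueError("initial balance must be non-negative")
        self._balance = balance

    def deposit(self, amount):
        if amount < 0:
            raise ValueError("cannot deposit a negative amount")
        self._balance += amount

    def withdraw(self, amount):
        if amount < 0 or amount > self._balance:
            raise ValueError("overdraft refused")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance
```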

Read the rest of this entry »

Dear deficit “hawks”: bite me July 26, 2010 | 09:18 am

Here’s the problem I have with the recent furor over the deficit and the debt: my long-term memory still works. See, I remember George W. Bush. I know it was a long time ago (eighteen months), and for those with the memories of fruit flies, let me remind you: the debt philosophy of the Bush administration, as articulated by Vice President Dick Cheney, was “Reagan proved deficits don’t matter”. And they lived by this philosophy. The debt in 2001 (Bush’s first budget) stood at $5.8 trillion- by 2009 (Obama’s first budget), it stood at $11.9 trillion, an increase of over $6 trillion. Over 8 years. That’s an average of $750 billion in new debt every year W was president. And, outside of the liberal blogosphere and Paul Krugman, I don’t remember anyone saying a goddamned thing about the deficit. For eight years.

But then, some time in 2009, something changed. The deficit “hawks”, who had spent eight years silent, suddenly awoke to the massive danger the debt posed, and took up their abandoned positions yet again. Now, all of a sudden, we’re treated to daily broadsides on the danger of the debt, and the need to eliminate the deficit immediately. Which leads me to ask: why now?

Read the rest of this entry »

Today’s Thought July 8, 2010 | 09:13 pm

Great tools are what you get when your language sucks.

The tools are developed to work around shortcomings in the language itself. If the language didn’t have those shortcomings, you wouldn’t need the tools, and they wouldn’t get developed. The best language would be just fine with nothing but Notepad and a compiler. So bragging about how great the tools are for your language is bragging about how much your language sucks, and how much help it needs to make development in it tolerable.

Today’s business idea someone else should do June 25, 2010 | 03:06 pm

So, I recently reread Faster Than the Speed of Light. By the way, I highly recommend this book (even if its central theory is wrong), if for no other reason than its explanations of relativity and cosmology. But anyways, a large hunk of the book is Joao’s fight with the scientific publishing world to get his theory even published. The thought that just occurred to me was that someone needs to apply the hard lessons learned in internet social media, especially aggregation sites like Slashdot, Reddit, and Digg, to scientific publishing.

The idea goes like this: a working definition of what, say, physics is, is that physics is what physicists do. It’s not perfect, but it works, and it’s much easier to measure. So you have this site, and you can get an account if you’re a physics researcher at an accredited research institution (this answers the question “who is a physicist?” by plugging into the existing infrastructure for deciding who is a physicist- accreditation). Having an account means you can do three things- upload papers, vote on papers, and comment on papers. Everyone, including the public, can see all the papers that have been uploaded, and their comments, but only accredited physicists get to upload papers. If you’re not an accredited physicist, you have to get at least one accredited physicist to upload your paper for you- the papers physicists upload don’t have to be theirs; they’re just vouching that they consider the papers to be physics. And papers are time-stamped so that priority is preserved.

Once a paper is uploaded, physicists can vote for it. Each physicist gets only one vote per paper, and every month the N papers with the most votes which haven’t already been published get published in the paper journal. And yes, the votes accumulate, so a paper that’s been slowly garnering a vote a month for the last five years (and thus has 60 votes) beats out the paper that got uploaded yesterday and has already garnered 59 votes (although that paper will likely make it next month). There might be an English major on staff to make sure everything is spelled correctly and is grammatical, but that’s optional.

I might allow downvotes (letting physicists say “this paper isn’t physics”, or “this isn’t worthy of being published”), but they would only count in the case of a tie (if two papers both have 100 upvotes, and only one can be published, publish the one with fewer downvotes). You might also have multiple journals, and allow the physicists to vote on which journal a paper should be in (“This paper really belongs in the Journal of String Theory, not in the Journal of Loop Quantum Gravity”, for example).
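The whole selection rule fits in a few lines. Here’s a hypothetical sketch (in Python rather than Rails; the Paper record and its fields are an invented illustration): votes accumulate across months, downvotes only break ties, and the upload timestamp breaks any remaining ties, preserving priority.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    uploaded_at: float      # timestamp; preserves priority
    upvotes: int = 0        # accumulates month over month
    downvotes: int = 0      # never disqualifies, only breaks ties
    published: bool = False

def monthly_selection(papers, n):
    """Publish the n unpublished papers with the most accumulated upvotes.

    Sort key: most upvotes first; among equal upvote counts, fewer
    downvotes wins; among those, the earlier upload wins.
    """
    candidates = [p for p in papers if not p.published]
    candidates.sort(key=lambda p: (-p.upvotes, p.downvotes, p.uploaded_at))
    winners = candidates[:n]
    for p in winners:
        p.published = True
    return winners

# The five-year paper with 60 votes beats yesterday's upload with 59:
old = Paper("slow burner", uploaded_at=1.0, upvotes=60)
new = Paper("hot preprint", uploaded_at=2.0, upvotes=59)
assert monthly_selection([old, new], n=1) == [old]
```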

I would make public the list of who voted, both for and against, each paper, for two reasons. First, it helps discourage people from engaging in personal vendettas, or at least makes it more obvious when they do, and second, it allows the heavyweights to influence the selection process (“if both Leonard Susskind and Stephen Hawking are voting for this paper, that’s good enough for me!”).

This is a couple of weeks of work for a competent Ruby on Rails hacker. The biggest coding problem will be handling the multitude of different file formats the papers will come in (Word, LaTeX, etc.) and making them look more or less uniform. The real problem will be marketing to the physics community itself. Physical Review D is an important journal to the physics community because a lot of important papers are published there, because a lot of important papers are submitted there, because it’s an important journal. Everyone buys IBM because everyone buys IBM. Except everyone doesn’t buy IBM anymore- change can happen.

The difference between an eternal noob and the not-yet-an-expert June 23, 2010 | 10:16 am

me: “I’ve been lost here before!”

my friend: “Ah, so you know where you are?”

me: “No, I told you- I was lost then too. I still don’t know where we are, just that I’ve been here before.”

When I was 17, my parents moved my family from Bettendorf, Iowa, to Chelmsford, Mass. Among many other interesting aspects of that move, it taught me one important skill that has stood me in good stead throughout my life: the ability to get lost. And the varying degrees of being lost, from “I’m not sure exactly where I am, but I know which neighborhood I’m in and the direction I want to head” to “I’m not sure which state I’m in, and since I’m in the land of the big square states this could be a problem”- and yes, I have been the latter (unsure if I was in Nevada, Utah, California, New Mexico, or Arizona). And how to find my way back again. And, most importantly- why to get lost. Most people look at you funny if you state that your intention is to go get lost. But you can’t learn something new unless you’re willing to leave the familiar and well known.

I thought about this when I was reading Thomas Petersen’s blog post on why your mom sucks at computers (for the record, my mom sucks at computers only compared to her professional-programmer husband and three professional-programmer children). What struck me was his mother’s unwillingness to get lost, and her inability to deal with it when she did get lost. This is what differentiates the eternal noob- someone who will never know their way around- from the person who will be a native (aka expert) and just isn’t yet: the willingness and ability to get lost, and to wander around finding what is out there. This is a skill, and it can be learned (or relearned, as it were- we are all born explorers). I certainly didn’t have that skill when we moved to Chelmsford; I learned it. Your mom can learn it too. Maybe you can as well.

Apple is just Microsoft with better marketing April 10, 2010 | 03:47 pm

So, I’m assuming that if you read this blog, you’ve heard about Apple’s new licensing restriction- the one wherein you are now only allowed to use C++, Objective-C, or JavaScript to program on the iPhone. If you haven’t, here are some links, or just get out from under your rock and glance at the programming reddit or Hacker News.

The consensus of the blogosphere is that this new clause is aimed at eliminating the ability to build abstract environments- especially portable abstract environments- on top of the iPhone. Adobe and Flash are mentioned a lot, and Google’s Android phone is mentioned often as well. But the one player I haven’t seen mentioned is Microsoft.

You see, this is exactly why Microsoft decided to “cut off the air supply” of Netscape. Netscape was developing what would become JavaScript, which would allow developers to write apps that would be portable across multiple operating systems- threatening Microsoft’s dominance in the application market. This caused a freak-out among the upper management of Microsoft, which led Microsoft to make some questionable moves, which led to the antitrust suit.

And as scummy as Microsoft was, what they did isn’t as bad as what Apple just did. Yeah, they used their monopoly power illegally to ensure Microsoft Windows was on every new PC, and then made sure every copy of Windows had a pre-installed copy of Internet Explorer. But they didn’t just change the licensing agreement for Windows to make Netscape illegal. I mean, imagine if they had done what Apple just did- just changed the licensing agreement for Windows to make it illegal to use anything other than C, C++, or Visual Basic to develop programs for Windows. Among other things, I think we’d now have a number of “Baby Bills” kicking around.

That is one difference between Apple and Microsoft- Microsoft was (and still is) a monopoly, while Apple isn’t. Even in the smartphone market. Even in the smartphone applications market. With Microsoft, people felt (rightly or wrongly) that there wasn’t anywhere else to go. Once there was, people abandoned the Windows platform in droves. No, really- aside from games, how many new applications have been developed on the desktop in the last 15 years? Application development shifted, basically in its entirety, to the web. With Apple, there is somewhere else to go- Google’s Android. Pulling monopoly stunts like this only works if you really are a monopoly- if you’re not, you’ll just bring hellfire and brimstone down around your head.

I hope so, at least. Because here’s the aspect of this whole affair that most concerns me: in attempting to harm Adobe and Google, Apple is hurting the whole industry, by putting the brakes on language development. No language more advanced than the three listed is allowed. No Haskell. No OCaml. No Clojure. No Lisp. No Ruby. No Python. No Groovy. No Scala. No F#. Heck, no Java or C#. The last 15-20 years of language design, lessons learned and advancements made, have been thrown out and outlawed. If this idea catches on- that this is how you lock developers into your API- then the whole industry will get stuck. If this clause had been written fifteen years ago, the languages listed would have been C, Fortran, and Cobol- and how would you feel about being required to program in those languages today? Well, that’s how you’re going to feel about C++ and Objective-C ten or fifteen years from now.

And don’t give me that shit about Apple being selective in enforcing this clause- don’t worry, they won’t enforce it on you. It doesn’t matter. You have to be insane to risk large amounts of capital (tens or hundreds of thousands of dollars of developer salaries to write the app, if nothing else) on the bet that Apple won’t choose to enforce this clause. No sane business manager would voluntarily add risk to an already risky proposition (most software projects fail) if they could at all avoid it.

Microsoft may have destroyed Netscape, and Digital Research, and dozens of other companies, with illegal abuse of their monopoly powers. But nothing they did threatened to bring the industry to a shuddering halt, ceasing all development of new and better ways of doing things. Microsoft never made Haskell illegal.

The danger isn’t just that Apple did this- the danger is that others may try what Apple did. Now that Apple’s broken the ice, what other companies might try this gambit? Who else might decide they want to control this or that API? Microsoft, Oracle/Sun, IBM, Adobe, and probably others all have APIs they might want to capture. One only has to look at the billions in profit Microsoft makes from its captive API to understand the allure.

What can we do about this? What can we do to prevent being legally restricted from improving our industry? The one answer I have is to rain (metaphorical) death and destruction onto Apple. Make the iPhone an object lesson for future generations of executives, the Edsel you never want to emulate. Even an apology and a retraction of that clause isn’t sufficient, as that leaves open the door to the idea that Apple merely didn’t handle it correctly, and that with the correct spin it might work. I don’t want future executives to say to themselves things like “Well, Apple just screwed up with their choice of languages- if they had included more advanced languages like Haskell or Ruby, it might have worked.” No- the problem I have is with limiting language choice at all. I don’t want to get locked into having to choose between Clojure, Ruby, and Haskell, because tomorrow some new language will come out that is better than all of them, and I want to have the option to use that language as well.

So congratulations, Apple- you’ve just leapt to the top of my shit list, dislodging Microsoft from its traditional post at the top of that list. I hereby declare myself, officially, anti-Apple.

Hash tables revisited March 17, 2010 | 05:38 pm

Just a quick note- I wanted to point this paper out to everyone here. Basically, the author demonstrates a denial-of-service attack that uses engineered hash collisions to force programs into worst-case behavior, just like I commented way back then.

And I’m not sure how much faith I’d put in a new hash algorithm being the savior here. The security of the system then rests on the cryptographic robustness of the hashing algorithm- remember, the attacker only has to find a sequence of keys which produces worst-case, or near-worst-case, behavior in order to launch the denial-of-service attack. So if there is a cryptographic flaw in the algorithm which allows a malicious attacker to discover collisions much more cheaply than brute force, then it becomes computationally feasible for the attacker to compute the worst-case sequence, especially once they put their botnet onto it.
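To make the worst-case behavior concrete, here’s a hypothetical sketch in Python (a real attack would craft colliding strings; the Colliding class just simulates them by overriding __hash__): when every key lands in the same bucket, n insertions cost O(n²) instead of O(n).

```python
import time

class Colliding:
    """Simulates attacker-crafted keys: every instance hashes to the
    same value, so they all land in one bucket of the hash table."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42                     # deliberate total collision
    def __eq__(self, other):
        return self.n == other.n      # probing must compare every entry

def time_inserts(keys):
    table = {}
    start = time.perf_counter()
    for k in keys:
        table[k] = True
    return time.perf_counter() - start

n = 2000
uniform = time_inserts(list(range(n)))                    # ints hash well
collided = time_inserts([Colliding(i) for i in range(n)]) # all collide
print(f"uniform: {uniform:.4f}s  colliding: {collided:.4f}s")
# The colliding run does roughly n*n/2 equality checks; scale n up and
# the quadratic blow-up is the denial of service.
```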