Archive for the ‘Free Software’ Category

Today, I had the new experience of helping out with a podcast. Like most people, I hate hearing my voice (it always sounds clumsy and over-precise), and any wishful belief in my own eloquence wilts when I hear all the “ums” with which I punctuate my speech, but I hope I have the chance to take part in another one.

I could hardly be excluded from this one. After all, it was an article I published a few weeks ago, “GNOME Foundation defends OOXML involvement,” that sparked the podcast. Moreover, when Jeff Waugh of the GNOME Foundation first floated the idea, he had me in mind as a neutral third party, and I was the one who pitched the idea to Linux.com, the main buyer of my articles. Admittedly, it was an easy sell, since Robin Miller, the senior editor at Linux.com, is a part-time video producer and is always looking for ways to extend the print coverage on the site, but I was still the one who got things moving.

After stumbling into the center ring while technical problems occupied Robin and Rod Amis, the producer, and stuttering into the silence, I soon found my tongue. The experience was not much different, I found, from doing an ordinary interview or teaching a university seminar. In all three cases, your purpose is not to express your own opinions, but to encourage others to speak, and to clarify their vague references for the sake of listeners. The fact that there was an audience of about 650 – good numbers, Rod tells me, for a daytime podcast – didn’t really affect me, because I had no direct contact with them.

Jeff Waugh and Roy Schestowitz, the two guests on the podcast, have been having bare-knuckle arguments on various forums, so I was expecting to have to referee the discussion. In fact, the image kept occurring to me of those soccer referees who are sometimes chased off the field by irate crowds. However, the slugfest I expected never materialized. It’s harder, I suppose, to insult someone verbally, even over the phone, than to fan a flame war on the Internet, and both were more polite live than they had ever been at the keyboard.

Besides, Robin has the voice of someone calmly taking charge without any expectation of contradiction. Perhaps, too, an echo of my old university instructor voice ghosted through my own words.
But, whatever the case, everyone survived. I even think that the increased politeness influenced both Jeff and Roy to make concessions to each other’s viewpoints that they never would have considered online. As a result, I think that the point that the dispute is one of tactics rather than of different goals came through for the first time in the month or more that it has been unfolding. However, I’m not sure that either of the principals has made the same observation.

The show had glitches that better planning might avoid next time. However, I like to think that both sides had a reasonable chance to express themselves, so it could have been worse. The Linux.com regulars are already discussing the possibility of another podcast, and I, for one, can’t wait.

Read Full Post »

One of the hardest things about writing on free software is the expectations placed on me. Because the cause is good, many people expect me to write as a loyal partisan. And in one sense, I am: If I didn’t feel the topic was important, I wouldn’t write about it. However, I am not so partisan as to praise where I see problems in either software or people. Nor do I always feel an obligation to take sides when I explain a multi-sided issue, or when the general reaction from typical readers is so predictable that taking sides would be belaboring the obvious. To me, these practices are part of my efforts to approach journalism with professionalism. However, judging from the comments I sometimes receive, they often enrage readers, especially those expecting a confirmation of their views.

Understand, I’m not naive. I know that complete objectivity is as impossible as a centaur. But I’m idealistic enough to think that, except when I’m writing an obvious commentary, the articles I write as a journalist are more useful to people when I’m not writing as an advocate. Rather, I try to write in an effort to express the truth as I see it. I’m sure that I fail many times, either because I don’t have all the facts or because I feel too strongly on a subject.

However, as George Orwell said about himself, I believe that, unlike the vast majority of people, I have the ability to face unpleasant truths – facts that I might dislike personally, but have to acknowledge simply because they are there (I lie very poorly to myself). And, since my first or second year at university, I’ve been aware that I have the unusual knack of empathizing with a viewpoint even while I disagree with it. With these tendencies, I believe that, if I make the effort, I can provide a broader perspective than most people – and that a broader perspective, if not the truth, is generally more truthful than a limited one.

Moreover, I believe that these are precisely the tendencies that a journalist needs to be useful to readers. Nobody can write uncritically about any cause without, sooner or later, lying for the sake of the cause and losing their integrity. For all I admire the ethics and hard work of many people in the free software community, even those I admire most sometimes express an ill-considered or an ignorant opinion. Some act short-sightedly. Very occasionally, a few act immorally, or at least for personal gain rather than the good of the community. And, whenever someone does any of these things, it’s my job to report the fact. To do otherwise would be against my principles, and a mediocre performance of my job.

This honesty is especially important in the computer industry. Many mainstream computer publications are notorious for avoiding criticism of the companies who buy advertising from them. Such publications are worthless to their readers, and a betrayal of the trust placed in them. I’m lucky enough to work for publications that don’t work that way, so I can report the bad along with the good.

However, to some of the audience, that’s not enough, especially on a controversial subject. They read to have their views reinforced, and, if I don’t happen to serve their need, they accuse me of bias. Often, they have to cherry-pick their evidence to build the case against me, and usually they seize on the fact that I reported a viewpoint contrary to theirs without denouncing it. Often anonymous, they attack me in the most strongly worded terms, sometimes explaining in exhaustive detail the error of my ways in what usually amounts to a clumsy belaboring of the obvious.

Occasionally, one will demand the right to a rebuttal from the editors.
So far, I have yet to see any of them actually write the rebuttal, but I suspect that, if they did, it would probably be unpublishable without considerable revision. Polemic is a difficult art, and has a tendency to descend into trite comments and over-used expressions in the hands of novices.

(Which is another reason that I don’t write opinion pieces too often. They’re difficult to write well, and I don’t think I’m particularly skilled at them. And, anyway, a successful polemic is more about rhetorical tricks and memorable turns of phrase than about facts and explanation. It’s a play more on emotion than logic, and for that reason always seems a bit of a cheap trick. I’m not nearly as interested in manipulating readers as informing them.)

But what always tickles me about such accusations is that they frequently come in pairs. Many times, after writing on a controversial subject, I’ve been denounced as biased from both sides – sometimes on the basis of the same paragraph or sentence.

I suppose these twinned accusations could be a sign of sloppy writing on my part. However, I prefer to view them as a sign that the problem lies more in the readers than in me. If both sides find something to disparage in one of my articles, then I can’t help thinking that I’ve had some success with covering the topic comprehensively.

Of course, all these thoughts could be nothing more than an explication of my personal myths – the stories I tell myself to keep me going. The image of the investigative reporter who risks everything to get the truth out is still a very powerful myth, and one that I not only buy but apparently have a lifelong subscription to.

But, contrary to popular usage, a myth is not the same as a lie. And, in this case, I like to think that, even if I am partly deceiving myself, my work is still better for my acceptance of the myth.

Read Full Post »

I’m almost getting afraid to look at a newspaper or any other traditional print media. Every time I do, some writer or other seems to be belittling an Internet phenomenon such as blogging, Facebook, or Second Life. These days, such complaints seem a requirement of being a middle-aged writer, especially if you have literary aspirations. But, if so, this is one middle-aged, literary-minded writer who is sitting out the trend.

The Globe and Mail seems especially prone to this belittling. Recently, its columnists have given us the shocking revelations that most bloggers are amateurs, that Facebook friendships are shallow, and that, when people are interacting through their avatars on Second Life, they’re really at their keyboards pressing keys. Where a decade ago, traditional media seemed to have a tireless fascination with computer viruses, now they can’t stop criticizing the social aspects of the Internet.

I suppose that these writers are only playing to their audiences. After all, newspaper readers tend to be over forty, and Internet trends are generally picked up by those under thirty-five. I guess that, when you’re not supposed to understand things, putting them down makes you feel better if you’re a certain kind of person.

Also, of course, many columnists, especially those who aspire to be among the literati, see the rise of the Internet as eroding both their audiences and their chances of making a living. So, very likely, there’s not only incomprehension behind the criticism but also a primal dose of fear that deserves sympathy.

At first glance, I should sympathize with them. I’m in their age group, share something of their aspirations, and I’m cool to much of the social networking that has sprung up in recent years. Yet somehow, I don’t.

For one thing, having been on the Internet several years longer than most people, I learned long ago that communities exist for almost everyone. If you don’t care for Facebook, you can find another site where you’re comfortable. If you dislike IRC, you can find a mail forum. If you can’t find a blog that is insightful and meaningful, you probably haven’t been looking around enough, but surely the Pepys’ Diary page will satisfy the most intellectual and literary-minded person out there. So I suspect that many of those complaining are still unfamiliar enough with the technology that they don’t really know all that’s available via the Internet.

Moreover, although I ignore large chunks of the Internet, my only regret is that it didn’t develop ten or fifteen years earlier, so that I could have been a young adult when it became popular.

Despite my age, the Internet has been the making of me. It’s helped to make the fantasy and science fiction milieu that I discovered as a boy become mainstream – and if that means people are watching pseudo-profundities like Battlestar Galactica, it also means that a few are watching movies like Neil Gaiman’s Stardust or Beowulf and moving on to discover the stories and novels that really fuel the fields. It’s given me a cause worth focusing on in free software, and a job as an online journalist that already has been one of the longest lasting of my life, and that still doesn’t bore me. Without the Internet, I just wouldn’t be the person I am today.

Nor, I suspect, would I like that alternate-universe me very much.

Having absorbed the toleration that underlies much of the Internet, I can’t help feeling that criticizing other people’s browsing habits shows a lack of manners and graciousness that is grounds for shame rather than self-righteousness. But, in my case, it would show a lack of gratitude as well.

Read Full Post »

Last week, ABC’s 20/20 ran a piece on the murder trial of Hans Reiser, the free software developer accused of murdering his wife in Oakland. I sighed in relief when it ran, because it didn’t include me.

It could have. Since I wrote one piece last year about Reiser’s problems with getting the Reiser4 filesystem accepted into the Linux kernel and another about what was happening with his company in the wake of the murder charges, I’ve fielded eight or nine requests from the mainstream media to talk about the background to the case. Since early summer, several of those requests have come from ABC. But I never really felt comfortable doing so, although I made it clear that I had no opinion one way or the other about the case, and only talked about Reiser’s work and reputation and what the free software community was like.

At the time, I rationalized my general comments as helping out other journalists. Also, considering that I’ve made a career out of explaining developers to non-developers, I figured that I might be able to see that the community wasn’t too badly misrepresented. And, let’s be honest, I was flattered.

But, simultaneously, I was uneasy, and this uneasiness continued to grow as ABC continued to talk to me. There was even talk of flying me down to San Francisco for a day to do an interview, which provoked a kind of Alice in Wonderlandish feeling in me. Spend the day travelling for something that I wasn’t that interested in? And going to San Francisco – one of my favorite cities – with no time for wandering around struck me as not worth the sense of self-importance such a trip would no doubt give me.

I tried suggesting other people in the free software community that ABC might contact. I even suggested one notoriously egotistical person, figuring that they would be pleased to be asked and would give ABC so much copy that its reporters would have no further need of me.

That only worked for a few weeks; then I received another phone call. At that point, I realized that I didn’t have a valid passport, which Canadians like me now need to fly to the United States. I explained this difficulty to a reporter, and how I didn’t really want the extra hassle of driving across the border and catching a flight in Bellingham – and he countered with the idea of flying a camera crew up to Vancouver to talk to me.

I thought that unlikely, so I said that would be acceptable. For a while, I was worried that ABC might actually do it, too, but in the end the producers decided not to bother.

That was just as well, because in the interim, I had resolved to refuse the interview regardless of the conditions. It took me a while to understand my reluctance, but what I concluded in the end was this: I didn’t want to feed my self-importance at the expense of the Reiser family. No matter what actually happened, those involved in the case are in a world of pain, and I didn’t want to piggyback on that pain for petty personal reasons.

And, ultimately, my reasons would be personal. No matter how well I can explain the free software community to the public, I’m far from the only one who can do so.

With this realization, I felt such relief that I knew that I had made the right decision. Now, I only hope that I can remain as sensible if someone contacts me about the case again.

Read Full Post »

Every month or so, I get a request from a magazine asking if I want to write about GNU/Linux or free software. One or two are legitimate professional offers that I am glad to consider, if only for variation and to lengthen the list of markets to which I can sell – or, to be more exact, to which I might some day sell, since I don’t have many open slots on my monthly schedule. However, more often, the magazine either doesn’t pay or else pays a token amount like $30 per page, and I have to decline, despite their offers of additional payment in copies or free advertising, neither of which I have much use for. The exchange never fails to leave me feeling guilty, defensive, and unsatisfied.

Admittedly, many magazines and publishers prey on wannabe writers’ desire to be published. However, I’m sure that many are doing their best, paying what they can and hoping that they might one day generate enough income to pay their contributors better. In fact, I am sure that most of them are sincere; they’re generally too excited about what they are doing to be deliberate exploiters.

This sort of low-paying work might have been acceptable in the days when I was writing articles in my spare time and trying to build a reputation. I could have helped the editors, and they could have helped me. But how can I explain to these well-meaning people that I’m not just dabbling in writing these days? That in the time it takes me to write them a 1,500-word article, I could make ten or fifteen times as much writing for my regular markets? That I literally cannot afford to contribute to their magazine or web site?

I can’t explain, of course. Not without being completely undiplomatic and crass. So, I usually hedge until my correspondents’ persistence forces me to be blunter, or they come up with another argument.

Usually, the next argument is the idea – either openly stated or hinted at – that, since all of us are interested in free software, I am somehow obligated to give my labor for free.

Consciously or otherwise, this argument conflates the meanings of free software. Free software, as everyone constantly points out, isn’t free in the sense of costing nothing. It’s free in a political or philosophical sense – and, on that score, I have a good conscience. It seems perfectly reasonable to me that, in return for the money I need to live, the markets where I publish should have exclusive rights to my articles for thirty days. After that, I am perfectly happy to have the articles reprinted or translated under a Creative Commons Attribution – No Derivatives license. In fact, I almost never refuse such requests.

Besides, are the people who are trying to guilt-trip me donating their own labor for free? In many cases, I doubt it.

Anyway, I maintain that, in keeping people informed about free software, I am already contributing to the greater cause. I happen to be one of those lucky enough or persistent enough to be able to earn my living through doing so, but I don’t see why the one should invalidate the other.

True, I do make some gratis contributions to free software in my own time – but that’s beside the point. What matters is that I don’t feel the need to prove my credentials, particularly to strangers. So, at this point, they usually break off the correspondence, often with parting comments about my selfishness or lack of generosity.

And of course I do feel hard-hearted at times. But, when it comes to the way I make my livelihood, I have to ration my time. Otherwise, I could easily lose a large chunk of my income for the month. So, I break off, too, muttering my excuses after an exchange that has satisfied nobody.

Read Full Post »

Today, I received the following e-mail. At the sender’s request, I have removed any personal details:

I was wondering if you had any advice for me about how to perform some marketing/pr for my Linux [project]. I’ve started doing interviews with developers and I have created a community news site.

But is there any way I could possibly get [my project] mentioned in a magazine like Linux Journal? Is there any free advertising I could take advantage of on certain web sites? I thought you may have some ideas for me because you have experience with this kind of thing. Any help you could provide me would be appreciated.

I generally receive about 3-4 requests of this sort a year, so I decided to post my reply here, where I can refer others to it:

You’re not likely to find free advertising on sites that will do you any good, so your best bet is to try to get on the various sites as a contributor. Linux.com only takes original material for its main features, but it does have the NewsVac items, the three- or four-line link summaries on the right of the page that are very popular. And, of course, sites like Slashdot, Digg, and Linux Today are all about links to already published material.

If you have a solid piece of news — which for a piece of free software usually means new releases and unique features — at Linux.com you can pitch a story and write it yourself. However, you’ll be asked to include a disclaimer that explains your connection with your subject matter, and the article will be rejected if you are being a fanboy. That means you can’t review your own distro, but you might be able to do a tutorial on a distribution’s packaging system, for instance.

Alternatively, you can send news releases in the hopes of convincing either an editor or a writer to cover your news. However, don’t be pushy. Submitting a news release once is enough, and popping back several times to ask if it was received or whether anyone is interested will probably only guarantee that you’ll annoy people so that they won’t cover your news no matter how big it is.

The ideal is to build up an ongoing relationship with a few writers, in which you give them stories to write about — we’re always looking — and they give you the coverage you want when you have news that readers might want to hear.

Of course, you open yourself up to negative comments if the software deserves them, but that’s the chance you have to take. However, for the most part, both commercial companies and large community projects find the risk well worth taking. It’s not as though any of the regular writers deliberately sits down to a review with a determination to be negative (although, conversely, we don’t set out to praise, either; we’re not just fans).

This process doesn’t happen overnight, so be patient. But, in the long run, you should get some of the publicity you seek.

I don’t know whether this information is useful to others. To me, it seems that I’m saying the obvious, but part of that reaction is undoubtedly because I deal with these things daily. Perhaps to others, these thoughts aren’t obvious, so I’m hoping that someone will find them useful.

Read Full Post »

Long ago, I lost any queasiness about the command line. I’m not one of those who think it’s the only way to interact with their computers, but it’s a rare day that I don’t use it three or four times on my GNU/Linux system. No big deal – it’s just the easiest way to do some administration tasks. Yet I’m very much aware that my nonchalance is a minority reaction. To average users, the suggestion that they use the command line – or the shell, or the terminal, or whatever else you want to call it — is only slightly less welcome than the suggestion that they go out and deliberately contract AIDS. It’s a reaction that seems compounded of equal parts fear of the unknown, poor previous experiences, a terror of the arcane, and a wish for instant gratification.

Those of us who regularly try two or three operating systems every month can easily forget how habit-bound most computer users are. The early days of the personal computer, when users were explorers of new territory, are long gone. Now, the permanent settlers have moved in. The average computer user is no longer interested in exploration, but in getting their daily tasks done with as little effort as possible. For many, changing word processors is a large step, let alone changing interfaces. And both Windows and OS X encourage this over-cautious clinging to the familiar by hiding the command line away and promoting the idea that everything you need to do can be done from the desktop. The truth, of course, is that you can almost always do less from a desktop application than its command line equivalent, but the average user has no experience that would help them understand that.

Moreover, those who have taken the step of entering cmd into the Run command on the Windows menu have not found the experience a pleasant one. DOS, which remains the command line that is most familiar to people, is an extremely poor example of its kind. Unlike BASH, the most common GNU/Linux command line, DOS has only a limited set of commands and options. It has no history that lasts between sessions. Even the act of navigating from one directory to the next is complicated by the fact that it views each partition and drive as a separate entity, rather than as part of a general structure. Add such shortcomings to the ugly, mostly unconfigurable window allotted to DOS in recent versions of Windows, and it’s no wonder that DOS causes something close to post-traumatic stress disorder in average users. And, not having seen a better command line interface, most people naturally assume that BASH or any other alternative is just as stressful.
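For anyone who has never seen a better command line interface, a few everyday BASH commands give a rough sketch of the difference. The directory and file names here are only examples from a default setup:

    # Every partition and drive appears in one directory tree:
    cd /media/usbkey          # a USB key, not a separate "E:" drive
    cd ~/articles             # then back to a directory in your home

    # Command history survives between sessions (saved in ~/.bash_history):
    history | tail -n 5       # the last five commands, even from days ago
    !!                        # repeat the previous command
    # Pressing Ctrl-R searches the entire history as you type

None of this is exotic; it is simply more than DOS users have ever been given.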

Yet I sometimes wonder if the main reason for nervousness about the command line isn’t that it’s seen as the area of the expert. In recent years, many people’s experience of the command line is of a sysadmin coming to their workstation, opening a previously unsuspected window, and solving problems by typing something too fast for them to see from the corner into which they’ve edged. From these encounters, many people seem to have taken away the idea that the command line is powerful and efficient. That, to their minds, makes it dangerous – certainly far too dangerous for them to dare trying it (assuming they could find the icon for it by themselves).

And in a sense, of course, they’re right. In GNU/Linux, a command line remains the only interface that gives complete access to a system. Nor are the man or info pages much help; they are often cryptically concise, and some of the man pages must have come down to us almost unchanged from the 1960s.

The fact that they are also wrong is beside the point. Many users aren’t clear on the concept of root accounts, file permissions, or any of the other safeguards that help to minimize the trouble uninformed users can blunder into.

The trouble is, understanding these safeguards takes time, and investing time in learning is something that fits poorly with our demand for instant gratification. By contrast, using a mouse to select from menus and dialogs is something that people can pick up in a matter of minutes. Just as importantly, the eye-candy provided by desktops makes them look sophisticated and advanced. Surely these signs of modishness must be preferable to the starkness of the command line? From this attitude, insisting on the usefulness of the command line is an anachronism, like insisting on driving a Model T when you could have a Lexus.

The truth is, learning the command line is like learning to touch-type: in return for enduring the slowness and repetitiousness of learning, you gain expertise and efficiency. By contrast, using a graphical desktop is like two-fingered typing: you can learn it quickly, but you don’t progress very fast or far. To someone interested in results, the superiority of the command line seems obvious, but, when instant gratification and fashion is your priority, the desktop’s superiority seems equally obvious.

And guess which one our culture (to say nothing of proprietary software) teaches us to value? As a colleague used to say, people like to view a computer as an appliance, not as something they have to sit down and learn about. And, what’s more, the distinction only becomes apparent to most people after they start to know their way around the command line.

Whatever the reasons, fear and loathing of the command line is so strong that the claim that GNU/Linux still requires its frequent use is enough to convince many people to stick with their current operating system. The claim is no longer true, but you can’t expect people to understand that when the claim plays on so many of their basic fears about computing.

Read Full Post »

My review of the latest release of Ubuntu was picked up by Slashdot this week, releasing a flood of criticism.

Although the article praised Ubuntu, it was also one of the first to mention some of its shortcomings, so it probably provoked more reaction than the average review. Much of the criticism was by people who didn’t know as much about the subject as they thought they did, and even more was by people who had either misread the article or not read it at all. But the comments I thought most interesting were those from people who criticized me for suggesting that in some cases Ubuntu made things too simple, and didn’t provide any means for people to learn more about what they were doing. Didn’t I realize, the commenters asked, that the average person just wanted to get things done? That few people wanted to learn more about their computers?

Well, maybe. But as a former teacher, I can’t help thinking that people deserve the chance to learn if they want. More – if you know more than somebody, as Ubuntu’s developers obviously do, you have an obligation to give them the opportunity. To do otherwise is to dismiss the average person as willfully ignorant. Possibly, I’m naive, but I’m not quite ready to regard others that way.

Anyway, which came first: operating systems like Windows that prevent people from learning about their computers, or users who were fixated on accomplishing immediate tasks? If computer users are task-oriented, at least some of the time, the reason could be that they’re conditioned to be so. Perhaps they’ve learned from Windows that prying into the inner workings of their computer is awkward and difficult. We don’t really know how many users will want to learn more, given the opportunity.

Nor will we, until we design graphical interfaces that give users the chance to learn when they want to. Contrary to one or two commenters, I’m not suggesting that every user will always want to do things the hard way and use the command line – I don’t always want to myself, although I gladly do so when typing commands is the most efficient way to do the task at hand.

But where did so many people get the assumption that there’s such a contradiction between ease of use and complexity that choosing one means you must forgo the other? It’s mostly a matter of tidying advanced features into a separate tab, or perhaps a pane that opens to reveal features that a basic user doesn’t want.

However, when so many people believe in the contradiction, we’re not likely to see graphical interfaces that are as useful to demanding users as they are to basic ones.

Even more importantly, I suggest that giving users the chance to educate themselves is a corollary of free software principles. If free software is only going to empower users theoretically, then it might as well not do so at all. To help that empowerment along, free software has to provide the opportunity for users to learn, even though few may take the opportunity. Yet, so long as the chance exists that any users want the opportunity, it needs to be offered.

Moreover, I believe that, given the chance, many people will eventually embrace that opportunity. The first time that they use a free software interface, they may be focusing mainly on adjusting to so much that’s new.

However, many of them will eventually learn that they can do things their own way and take more control. And, surrounded by such choice, many may take advantage of it. If they don’t know the choices are available because their desktop has been simplified until the choices are obscured, then the developers are doing them a disservice.

Some might say that simplification is needed to attract people to GNU/Linux. Personally, though, I doubt that offering exactly the same thing they can get on Windows is likely to attract anyone. If free operating systems are going to get a larger market share, then it will most likely be by providing a new perspective on computing. I like to think that new perspective should be attempting to accommodate everyone, not just beginners.

Read Full Post »

Setting up a new workstation is the easiest time to choose a new GNU/Linux distribution. Having just installed Fedora 7 on my laptop so I’d have an RPM-based system available for my work, I seriously considered ending my five-year endorsement of Debian on my workstation. Perhaps I should follow the crowd and go to Ubuntu? Some other DEB-based distribution? Maybe Slackware or Gentoo to grab a bit of geek-cred? But after debating my choices for a couple of days, I decided to stick with Debian for both technical and philosophical reasons.

Oh, a small part of my decision was convenience. Over the years, I’ve built up three pages of notes on exactly what I need to install, configure, and modify to customize my workstation as I prefer. Probably, I could port most of these notes to another distribution, but I would have to change some of the configuration notes, as well as the names of some of the packages. For better or worse, I’m comfortable with Debian — sometimes, I think, too comfortable.

However, a larger part of my decision is practical. Not too many years ago, Debian held a decided advantage because its DEB packages, if properly prepared, were among the few that automatically resolved dependencies when you added software. That’s no longer true, of course, but Debian’s policy of packaging everything from kernels to drivers means that many installation tasks are far easier than in most distributions.

Moreover, I appreciate Debian’s policy of including recommended and related packages in the descriptions of packages. These suggestions help me to discover software that I might otherwise miss, and often help the packages I originally wanted to run better.
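To give a rough idea, these relationships are easy to inspect before you install anything; the package name below is only an example:

    # Show a package's dependencies, recommendations, and suggestions:
    apt-cache show inkscape | grep -E '^(Depends|Recommends|Suggests):'

The same fields appear in the online package descriptions at packages.debian.org.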

Another advantage of Debian is its repository system. As many probably know, Debian has three main repositories: the rock-solid, often less than cutting edge stable repository, the reasonably safe testing, and the more risky unstable. For those who really want the cutting edge, there is also the experimental repository. When a new package is uploaded, it moves through these repositories, eventually slipping into stable when it has been thoroughly tested. Few, if any, distributions are more reliable than Debian stable, and even Debian unstable is generally about as safe as the average distribution.

What this system means for users is that they can choose their preferred level of risk, either for a particular package or for their system as a whole. For instance, by looking at the online package descriptions, you can see what dependencies a package in unstable has, and decide whether installing it is worth the risk of possible damage to your system, or else judge how easily you can recover from any problems. This system means that most experienced Debian users have a mixed system, with packages from more than one repository — an arrangement that is far preferable to blindly updating because an icon in the notification tray tells you that updates are available. It also means that official releases don’t mean very much; by the time one arrives, you usually have everything that it has to offer anyway.
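For the curious, a mixed system takes only a few lines of configuration. What follows is a minimal sketch, assuming a standard apt setup; the mirror and the priority numbers are only examples:

    # /etc/apt/sources.list -- track both stable and testing:
    deb http://ftp.debian.org/debian stable main
    deb http://ftp.debian.org/debian testing main

    # /etc/apt/preferences -- prefer stable, but keep testing available:
    Package: *
    Pin: release a=stable
    Pin-Priority: 900

    Package: *
    Pin: release a=testing
    Pin-Priority: 400

With those settings, apt-get installs from stable by default, while something like apt-get -t testing install package-name pulls a single package, along with its dependencies, from testing.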

In much the same way, each individual repository is arranged according to the degree of software freedom you desire. If you want, you can set up your system to install only from the main section, which includes only free software. Alternatively, you can also use the contrib section, and install software that is free in itself but which relies on unfree software to run, such as Java applications (at least until Java finishes becoming free). Similarly, in the non-free section, you can choose software that is free for the download but is released under restrictive licenses, such as Adobe’s Acrobat and Flash players. Although my own preference is to stay with main, I appreciate that Debian arranges its repositories so that I can make my own choice.
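In practice, the choice comes down to a single line in /etc/apt/sources.list. Again, the mirror here is only an example:

    # Free software only:
    deb http://ftp.debian.org/debian stable main

    # The same line, opting in to the contrib and non-free sections as well:
    deb http://ftp.debian.org/debian stable main contrib non-free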

Almost as important as Debian’s technical excellence and arrangements is the community around the distribution. This community is one of the most outspoken and free-thinking in free and open source software. This behavior is a source of irritation to many, including Ian Murdock, the founder of the distribution and my former boss, who thinks that the distribution would run more smoothly if its organization were more corporate. And, admittedly, reaching consensus or, in some cases, voting on a policy can be slow, and has problems scaling — problems that Debian members are well aware of and are gradually developing mechanisms to correct without changing the basic nature of the community.

Yet it seems to me that Debian is, in many ways, the logical outcome of free software principles. If you empower users, then of course they are going to want a say in what is happening. And, despite the problems, Debian works, even if it seems somewhat punctilious and quarrelsome at times, insisting on a standard of purity that, once or twice, has even been greater than the Free Software Foundation’s. The community is really a daring social experiment, and its independence deserves far more admiration than criticism.

Of course, I could get many of the same advantages, especially the technical ones, from Ubuntu, Debian’s most successful descendant. But Debian has had longer to perfect its technical practices, and, if the Ubuntu community is politer, its model of democracy is further removed from the town meeting than Debian’s. Certainly, nobody can demand a recall of Mark Shuttleworth, Ubuntu’s founder.

Which brings up another point: I’m reluctant to trust my computer to an eccentric millionaire, no matter how benevolent. This feeling has nothing to do with Mark Shuttleworth himself, whom I’ve never met, and who, from his writing, seems a sincere advocate of free software. But one of the reasons I was first attracted to free software was because, in the past, my computing had been affected by the whims of corporations, notably IBM’s handling of OS/2 and Adobe’s neglect of FrameMaker. Trusting my computing to an individual, no matter how decent, seems no better. I’d rather trust it to a community.

And Debian, for all its endless squabbles and the posturing of some of its developers, has overall proven itself a community I can trust. So, at least for the time being, I’ll be sticking with Debian.

Read Full Post »

Having barely recovered from getting my new laptop set up, I spent this weekend setting up my new workstation. Since I only buy a new computer every three or four years, it’s a labor-intensive job – a real busman’s holiday, since I do a dozen or more installations of operating systems each year as a reviewer. It’s also a chance to learn first hand the recent changes to hardware.

Because I’ve used alternative operating systems as long as I’ve had a computer, I always buy my workstation from a shop that does custom work. That way, I can be sure that I buy both quality parts and ones that will work with my preferred operating system. The shop I’ve dealt with for my last purchases is Sprite Computers, a Surrey store that I recommend unreservedly to anyone in the Lower Mainland.

This year, buying a custom machine backfired unexpectedly: My Debian GNU/Linux system worked perfectly because I had checked everything I bought, but I had to download drivers for the ethernet, sound, and video cards for Windows. Apparently, GNU/Linux hardware support may have finally surpassed that on Windows, as some pundits have been saying. But it’s been ten months since I’ve had a Windows installation about the house, and the added bother makes me feel that I haven’t been missing anything (aside from some games, which I never have time to play any more, anyway). I keep a small Windows partition because I sometimes need to check a reference to the operating system in a review, but for personal use, I wouldn’t miss it (nor the twinge of guilt I feel as a free software advocate for having a copy of Windows in the first place).

Another advantage of getting a custom computer is that, in placing my order, I always hear the latest trends in the business. Talking over my order with a sales rep, I learned that Windows XP was outselling Vista by a ratio of fifty to one. Furthermore, Windows XP is expected to stop selling next February, but computer businesses are already stockpiling copies. So much for claims about Vista’s sales.

I also learned that LightScribe, the DVD-etching technology I tried for the first time on my new laptop, is not in much demand, either. The drives and DVDs cost more for LightScribe, and it’s a slow, currently monochromatic technology that isn’t essential.

Similarly, the store sells more video cards from NVIDIA than from ATI. That trend was already obvious the last time I bought, but it seems to have accelerated, perhaps because NVIDIA’s aggressive marketing of other hardware products makes a bundle deal attractive. ATI’s sale to AMD may also make a difference, since manufacturers might be waiting to see what happens.

Of course, those who order custom computers are a small percentage of the public, but the comments I heard are interesting, all the same, since they are some of the few available from an unbiased source (that is, not from the manufacturer or a fan-boy review).

I infer other buying trends by the point at which increases in size or functionality suddenly take a jump in price. Sometimes, this point is obvious from the sales flyers that come to the door, but not always. For video cards, that point is 256 megabytes of RAM. For hard drives, it’s 500 gigabytes. For flat screen monitors, it’s 22 inches. Total system RAM is stalled at two gigabytes, apparently because Windows, which is the largest market, can’t handle more without an adjustment that most lay users don’t know about. Generally, I find that ordering a system according to this point means that, three or four years in the future, I still have an adequate system, if no longer a cutting edge one.

For now, I appreciate a number of features in my new workstation, starting with the increased speed, especially on GNU/Linux, which now zips along quite nicely. The dual-core processor, now standard on all new machines, makes multi-tasking smoother, too.

As for the wide screen monitor, which barely fits on the desk, that’s a practical change that I took to at once.

Yet I think the most welcome innovation is the cube case. Its dimensions – 9 x 10 x 14 inches – are small enough that I plan to put both my main and test computers under the same desk and use a KVM switch to move between them. Its blue light, although garish, means that I can crawl around under the desk chasing wires without carrying a flashlight. But, best of all, both sides are so well-ventilated that the overheating problems I’ve had in the hot weather may be a thing of the past.

These aren’t dramatic changes. Their relative modesty compared to changes in previous buying cycles suggests that the computer market is largely saturated and likely to remain so unless a breakthrough technology emerges. So, probably sooner than later, I will take the changes for granted. Just now, I shake my head when I realize that I now have flash drives with more memory than my first computer, but, on the whole, I don’t have a hardware fetish. Model numbers and stats seep through my head faster than they enter, and, so long as hardware works as advertised, I’m content. And I’m happier still to stop thinking of hardware, and get back to the business of writing.

Read Full Post »
