Archive for the ‘writing’ Category

Something has always bothered me about so-called celebrity bloggers, but I’ve never been quite able to identify it. I’ve vaguely thought that a lot of fuss was being made over very little, but never troubled to clarify the impression. The other day, though, I made a mental connection that explained why I was unimpressed.

When I was a university instructor, I did more than my share of first-year composition. When you’re new and being hired by the semester, that’s the price of clinging to the edges of academia. But the point is that, in most semesters, I would encounter students who had passed high school simply by completing every assignment. A few had even got scholarships because they had completed every assignment at exhaustive length. Often, some of these students would do poorly on their first few assignments – and, when they did, they couldn’t understand why. My explanation that, at university, you got marks for what you accomplished rather than what you attempted might have been speaking in tongues for all the sense it made to them. How could they not pass? They had done the assignment, hadn’t they?

Too many celebrity bloggers, I concluded, were like these students. To a surprising degree, what they are known for is not writing about interesting topics, or insightful comments, or even pithy turns of phrase, but blogging and nothing else.

I remember that, at one networking event, the organizer announced that a celebrity blogger would be live-blogging the event. Immediately, everyone applauded, while the blogger looked around modestly. The blogger didn’t participate much in the event, being hunched over the keyboard of a laptop all evening, so naturally I expected some clear and concise reporting, if not original insights along the lines of Joseph Addison’s or George Orwell’s.

What I found the next morning was an unfiltered stream of consciousness, perhaps of interest to the blogger’s friends, but no more intrinsically interesting than a conversation overheard on the bus. Authentic it might have been, but it was also the work of a well-bred bore, with little except basic literacy to recommend it.

The blogger, I realize now, was famous for blogging – not blogging well, but simply blogging. And, like the high school kids whose world view I used to detonate, to the blogger and their audience, that was supposed to be enough.

This impression was confirmed by a recent local blogathon, in which a number of these celebrity bloggers posted an entry every half hour for twenty-four hours, each trying to raise money for a favorite charity.

As a fund-raising idea, the blogathon seems futile and full of self-importance. Most people simply aren’t that interested in blogs. In every case where I could find figures, the amount of money raised was less than my average charity donation (and I’m far from wealthy).

But what matters here is how the effort was regarded. The organizer referred to participating in the blogathon as a “sacrifice” — mostly of time and sleep — when really it was nothing of the sort. It’s not a sacrifice when you get something in return, and, in my view, the sense of excitement and importance participants obviously received removed any sense of sacrifice from their efforts. And while such efforts are interesting when someone as accomplished as the American fantasist Harlan Ellison does them as a calculated bit of grandstanding (he has, for example, written in the window of a bookstore), I couldn’t help noticing that, in the case of the blogathon, what mattered was producing the requisite number of entries, not their quality.

Is anyone surprised that, except for an entry from a blogger who specialized in humor and one or two others, the entries were almost entirely void of interest for anyone except perhaps the bloggers and their immediate friends? Despite the popularity of personal journalism these days, it takes an expert to write a personal essay that interests acquaintances or strangers, and these didn’t. As Attila the Stockbroker used to say, it would take a mentally subnormal yak to care about most of the blogathon entries.

But that didn’t matter. What the blogathon participants cared about was that, like my composition students, they had completed the assignment.

I don’t mean to insult celebrity bloggers by this observation. I’m friendly with one or two local ones, and, away from their obsession, some of them are interesting enough people. If they or their friends get pleasure from such entries, who am I to say that they shouldn’t? But I do mean to say that what they are doing is played by relaxed rules, and that I’m not interested in imitating them.

For me, playing by real world rules is the only way worth playing. That doesn’t necessarily mean being paid for your writing (although it’s true that few reactions suggest more strongly that you are writing to at least a minimal standard than having someone buy the right to publish you). But unless my concern is to catch the interest of others with every trick I can muster, risking failure in the process, I’m no better than a high school student expecting to be rewarded just for trying.

That’s fine for practice. But high school was a long time ago, and I prefer to operate by real world rules. If the risk of failure is greater (and I’m the first to admit that I’ve failed many times), then so is the chance of a truly satisfying success (and I’ve had a few of those, although far fewer than my ego likes to admit). In the end, what matters to me is not how much I write, but the reception it gets from readers.

Otherwise, in my own estimation, I am no better than those owners of one-person companies who call themselves CEOs – self-aggrandizing, lacking self-perspective, and more than slightly pathetic.


In the last few days, I’ve had several experiences that make me think about my role as a journalist in the free and open source software community:

The first was a reaction I got from someone from whom I had requested some answers. Although I thought I was being polite, what I got back was an attack: “I am not prepared to answer any of these questions at this time. The intent of your article is to feed the flames and I will have no part in that. The fact that people like you like to stir up controversy is to be expected, since that is the job of any writer trying to get readers.”

This reply not only seemed presumptuously prescient, since I hadn’t written the article, or even decided what angle it would take, but also unjustifiably venomous, given that I didn’t know the person. Moreover, although I am in some ways a contrarian, in that I believe that questioning the accepted wisdom is always a useful exercise, when I write, I am far more interested in learning enough to come to a supported conclusion or to cover an interesting subject than I am in stirring up controversy for its own sake. The fact that an editor believes that a topic will get a lot of page hits is meaningful to me mainly because the belief sets me loose to write a story that interests me.

Still, I don’t blame my correspondent. He probably had his reasons for his outburst, even though they didn’t have much to do with me. But the fact that someone could react that way says some unpleasant things about some current practitioners of free software journalism — things that alarm me.

Another was the discovery of the Linux Hater’s Blog (no, I won’t link to it and give it easy page hits; if you want to find it, do the work yourself). I don’t think I’ve ever come across a more mean-spirited and needlessly vicious blog, and I hope I never do. However, recently, as I’ve been preparing stories, I’ve come across some commenters on individual mailing lists who were equally abusive. They are all examples not only of what I never want my work to be, but also of the sort of writing that makes me scrutinize my own work to ensure that it doesn’t resemble them in any way whatsoever.

Journalism that stirs up hate or encourages paranoia — or even journalism whose focus is sensationalism — is journalism played with the net down, and I’m not interested in it. Oh, I might make the occasional crack, being only human, or use the time-honoured tactic of saying something outrageous then qualifying it into a more reasonable statement. But, mostly, I prefer to work for my page hits.

Such sites also suggest that the line between blogging and journalism is sometimes being blurred in ways that aren’t very complimentary to bloggers. While some bloggers can deliver professional commentary, and do it faster than traditional media, others seem to be bringing a new level of nihilism to journalism.

A third is the unexpected death of Joe Barr, my colleague at Linux.com. Joe, better known as warthawg or MtJB (“Mister the Joe Bar,” a story he liked to tell against himself) encouraged me with his kindness when I was first becoming a full-time journalist. Later, when I started writing commentaries, his editorials were an indicator for me of what could be done in that genre. As I adjust to the idea that Joe isn’t around any more, I’m also thinking about how I’ve developed over the last few years.

The final link was a long interview – almost twice my normal time – with Aaron Seigo, one of the best-known figures in the KDE desktop project. One of the many twists and turns in our conversation was the role of journalism in free and open source software (FOSS). As Seigo sees things, FOSS journalists are advocate journalists, acting as intermediaries between FOSS projects and the larger community of users. He wasn’t suggesting that FOSS journalists are fan-boys, loyally supporting the Cause and suppressing doubts; nothing in his comments suggested that. But he was pointing out that FOSS journalists are an essential part of the community. In fact, much of what he said echoed my own half-formed sentiments.

Seigo also discussed how a small number of people making a lot of noise can easily deceive journalists who are trying to be fair and balanced by making the journalists think that the noisily-expressed beliefs are held by more people than they actually are. As he points out, the American Right has been very successful in this tactic, especially through talk-radio. He worried that part of the recent user revolt against KDE 4 might be due to something similar.

Listening to him, I tried to decide if I had fallen for this ploy in the past. I decided that I might have, although usually I try not just to be thorough, but also analytical enough to sift down to the truth.

I was going to try to summarize what I had learned from these four separate experiences, but my efforts to do so only sounded sententious – to say nothing of self-important and over-simplified. But I’ve thought of all four as I’ve exercised recently, and I’ll be thinking of them for some time to come, too.


On Canada Day, I had planned to be at a picnic at the Barnet Marine Park. However, when I came in from my morning run, a message from a publicist was on my business line. Would I be interested in a story?

Does a parrot love millet stalks? Suddenly, my plans for the day changed.

There’s something about a breaking news story that I can’t resist.

I wasn’t always that way. Several years ago, when I first started working as a journalist, I much preferred feature stories, although I didn’t know what they were called. With a non-timely story (as I called them to myself), I could take the time to get the facts right, and choose the details I used carefully – maybe even set each one aside for a day or two before submitting it so I could reconsider the wording and structure in the cold, rational light of second thoughts.

By contrast, news stories terrified me. Writing for a largely North American audience, I am at a natural disadvantage compared to writers on the east coast, who are awake and working three hours before I am. The idea of rushing to finish a story in a few hours, especially with my time zone handicap, seemed rash. Undoubtedly, I would make a mistake.

But that was before I had tried writing a breaking story, and before I had made at least my share of mistakes. Now, the challenge exhilarates me.

In fact, a large part of the appeal lies in the challenges. In the space of a few hours, I have to decide who to interview, talk to them and transcribe the results, then produce some sort of coherent story. I’ve learned a lot of tricks of the trade in learning how to cram all these tasks into as little time as possible.

I’ve also learned to work through my fears, ignoring the little inner voice that is constantly yammering that I’m not going to finish in time – and that, too, is a form of challenge. Douglas Adams can joke all he wants about the sound of deadlines whooshing over his head as they zoom by, but meeting a deadline – especially an impossibly close one – can be a measure of skill and a source of accomplishment, so long as you don’t have to manage the miracle four or five times a day.
Moreover, the flip side of urgency is a feeling of accomplishment when you’re done. And if you’ve beaten the other three or four writers on your beat and posted your story before they have posted theirs, then the accomplishment feels even greater. You’ve proven your ability to come through in a crisis.

Never mind that the portal sites will link to other stories on the subject as readily as to yours, or that the wisps of glory are ephemeral, blown away and forgotten within three or four days. For a few hours after your story appears, you can enjoy the delusion that you know something about writing after all.

These are all reasons why, despite all my efforts to be hard-headed, I can’t resist a news story. Let me sniff one, and I’ll foam at the mouth in my eagerness to enjoy the sense of purpose it brings.


The other night, I was lying on the futon when I noticed our parrots going absolutely rigid. Unlike their usual habit when they see a crow or a seagull, they were not calling out. Instead, they were making small, disturbed chirps, and their feathers were tight against their bodies – a sure sign of agitation.

Looking outside, I couldn’t see any reason for their disturbance at first. Then I noticed crows and smaller birds streaking low into the trees, and I realized a predator must be in the neighborhood. Sure enough, after a moment, I spotted a bald eagle perched atop a tree about a hundred meters from the window.

Most of what I could see with my unaided eyes was a black silhouette, since it was less than twenty minutes before sunset. Still, there was no mistaking what I was seeing. Although I had nothing I could compare the silhouette with to be sure of its size, the general outline was nothing like the crows that usually sit on that perch. It was longer and thinner. It didn’t move like a crow, either. It kept peering this way and that with a jerk of the head that was most uncrow-like, and fanning and unfanning its tail.

Nor could the avian reactions, both outside and in, leave me with any doubt that I was seeing a predator. Outside, I could see more silhouettes streaking low across the sky behind the eagle towards shelter. Nearby, the usual sounds of birds going to roost were completely missing from the night. Inside, our parrots were tense and straining forward to keep an eye on the visitor, ignoring everything else.

What interested me about the parrots’ reaction was that they had no trouble recognizing a predator when they saw one. Of our four parrots, at least one was taken from the wild as a baby and one was born in our living room; neither could have had any personal experience of raptors, yet both reacted exactly the same as the other two. Of course, nanday conures are a flock species, and alarms and greetings spread quickly, even between parrots who don’t like each other. Yet it seems clear that, at some instinctual level, they knew a predator when they saw one.

At the same time, the two on the futon were not so alarmed that they panicked. On some level, they seemed to know that they were far enough away not to be a main target. Possibly, too, they were aware of the window between them and the eagle; one of the first bits of training we do with all our birds is introduce them to the window, so that they don’t fly into it by accident. Instead of backing slowly away, as I half-expected, they not only stayed where they were, but actually moved forward a bit, craning, to get a better view. In other words, they were on alert, but seemed aware that they were safe. Perhaps what I was seeing was instinct and intelligence fighting for control.

After about five minutes, the eagle stirred abruptly, seeming to fall rather than fly from its perch. I soon found out why: a half dozen crows were charging it. A predator can make short work of a single crow, but a determined flock of crows outthinks and outguns it, and this eagle was obviously experienced enough not to challenge its attackers. Now its turn had come to seek shelter, and the last I saw, it was flapping furiously, trying to outdistance the crows and not having much luck.

The crows, no doubt, had a strong incentive. For the past ten days or so, the first of this year’s baby crows have been taking their first flying lessons, leaving many of them stranded, temporarily or permanently, on the ground or on remote perches, unsure how to get back to the nest. I had been dive-bombed several times myself because of my curiosity, and no doubt the eagle, for whom the crow fledglings would provide an easy meal, had raised the ire of the adults.

Given the timing, you can almost imagine the adult crows acting like a fighter squadron, scrambling to get a response into the air as soon as possible to confront the danger. When you consider crows’ intelligence and social organization, that metaphor might even be a reasonably literal description of what happened.

With the eagle gone, our parrots relaxed almost instantly – another sign, I suppose, that they know exactly what a predator is. As for me, I was left with both a gut-level awareness of the eagle as predator and our parrots as prey species that I had never had before. And, for all my fascination with observing the reactions, I found that I was relaxing too, along with the rest of my flock.


“I don’t think I have ever been quoted as well by a reporter before.”

I like to think that I’m immune to compliments, but this comment from someone I interviewed earlier this week got through my defenses. I interpret it as saying that I reported what the person said accurately.

Or, to be exact, my reworking of what the person said was a close reflection of their thoughts. Because, of course, no journalist quotes an interview subject word for word – unless, that is, they want to portray the subject as an incoherent half-wit. If you have transcribed as many interviews as I have, you’ll know that even the most fluent speaker can be made to look rambling and dull by quoting every little pause, space-filler, and change of direction in thought. To make the story read better, all journalists routinely edit quotations to help the continuity of their stories. If they are also ethical journalists, they do so while making sure that they preserve the sense of what the subject said.

At any rate, the comment pleased me, because accurately reflecting what someone has to say is a skill on which I pride myself. When I pitch a story to my editors, I rarely have a fixed opinion on the subject, except when I’m writing a commentary. Instead, I want to write the story because I’m interested in learning more about the subject. My opinion emerges as I research the story and talk to different people; on those occasions when I do have an opinion on a subject, I frequently alter it as I develop the story.

This habit does little to soothe the nerves of potential interviewees who ask what my perspective on the subject we’ll discuss happens to be, or ask in advance for the questions I want answered. If I were being completely honest, I’d have to explain that, in most cases, I don’t have the least idea what the perspective will be in the story. Similarly, while I jot down topics I want to cover, I rarely prepare specific questions. When I do, the resulting article is never an example of my best work. Instead, I develop my questions while listening to the interviewee. But these explanations, I suspect, would not be believed by the suspicious. They’d be sure I had a hidden agenda. The more I explained, the more paranoid they would become.

All the same, they’re the truth. Although I taught in an English Department when post-colonialism was the prevailing critical theory, I’ve never been a believer in completely subjective truth. At the risk of sounding naive, I believe, if not in objective truth, then in the effort to find it. I’m well aware that my bias creeps into everything I write, regardless of my intentions, but I don’t believe that my perspective is endlessly interesting, so I try to vary it with the opinions of those whom I talk to.

That’s not to say that I don’t have a definite viewpoint by the time I finish an article – although I do try to subdue the expression of it, because I happen to think that a gently-delivered truth that guides readers to the conclusions I want them to reach is more effective than a thundering oration. But when I want to persuade people to accept my outlook, I try to make the development of my points as accurate as possible, so that they are more logically compelling.

So, yes, I do try to report the gist of what other people say. It is both part of my code of ethics and part of my style of discussion to do so. No doubt I often fail, both ethically and stylistically, but such are my ideals – and I’m warmed by the thought that someone has noticed.


After writing professionally on the web for several years, I’m no stranger to careless readers and wonky comments. I learned long ago that not only can you not count on everyone to read your thoughts carefully, but that some subjects, such as Microsoft’s intentions towards free software, cause many people’s critical faculties to go on holiday. But none of the subjects I’ve written about provoke as much blind reaction as the suggestion that grammar should be descriptive rather than prescriptive.

I was reminded of this fact recently when I noticed that my article “Tech-writers, Grammar, and the Prescriptive Attitude” had not survived a redesign of the Techwr-l site where it was originally posted. Because I still get requests for it, I asked Deb and Eric Ray, who maintain the site, to send me a copy of the published version, and posted it on my blog. The request also prompted them to repost the essay on the original site. In the days since, I’ve been fielding a comment or two per day from both sources, most of them sent privately.

Some people approve of the sentiments in the article, but perhaps half are outraged. They set out to correct my thinking by pointing out that, without consistency, language ceases to communicate. Their assumption seems to be that only official English has consistency. I answer that all forms of English have their own rules – for instance, a double negative, which is considered wrong in official English, is perfectly understandable within the context of some Afro-American dialects (and, I might add, Old English). So far, none of these conversations have continued beyond this point.

Another conversation about the article begins with someone asking if I’m saying that we (by which the reader usually means sophisticated, literary folk like themselves) should go along with mispronunciations or incorrect uses, usually in sarcastic tones that suggest that, of course, I mean no such thing. I reply that I do mean exactly that: when a pronunciation or usage reaches a certain degree of popularity, it becomes standard usage.

To this comment, my correspondent usually replies that it is the duty of the literate to fight against such barbarisms. My response is that you are, of course, free to use any language that you care to, but, if you imagine that your example is going to inspire an outbreak of proper usage, then you think far too much of yourself. The most anyone can do, in the name of clarity and personal style, is avoid usages that are vague – and, at that point, another conversation peters out.

And these are just the most common ones. I’ve been accused of advocating complete chaos, of insisting on poetic language at the expense of clarity, and all sorts of other stances that have nothing to do with what I wrote, no matter how you construe my words. Often enough, people insist that I believe something that I frankly state that I do not believe. Apparently, many people – particularly those who work with words – have so much invested in their self-image as initiates into the secrets of proper grammar that any suggestion that their knowledge is as useless as heraldry immediately robs them of their ability to read and analyze text.

Personally, I find the idea that we have not one but dozens of versions of English exciting and challenging. It means that, as a writer, I have more to explore than I can possibly learn in one lifetime. But that, unfortunately, seems to be a minority viewpoint.

A good thing that I thrive on being a contrarian, or I might find the hostile responses disheartening.


A correspondent tells me that Boycott Novell’s Free Software Credibility List gave me a rating of three on a six point scale (I could link, but I don’t want to give the site any more hits than I have to). Until hearing this news, I didn’t know about the list, because, so far as free software is concerned, I only read news sites and blogs with either technical knowledge or expert commentary. Usually, too, I make a habit of not commenting negatively in public on anyone with a claim – no matter how remote – to being a journalist. At the very least, I generally don’t mention them by name. However, since my informant seemed to think I should be upset, I’m making an exception here.

To be honest, I am more amused than angry about a list whose silliness is exceeded only by the self-importance of its owners. I mean, how does any journalist, no matter how skilled a word-slinger, get the same rating as Stallman, the founder of the free software movement? Yet several do. And why are authorities like Eben Moglen off the list?

I also notice that, at least in some cases, the list seems a direct reflection of how closely a journalist’s opinion corresponds with Boycott Novell’s, rather than any criteria that might be mistaken for objectivity. Robin Miller, the senior editor at Linux.com, is apparently denigrated because he took a group tour of the Microsoft campus a couple of years ago (I’m sure the fact that he presided over a podcast in which a Boycott Novell writer performed poorly has nothing to do with his ranking). Other writers seem to rate a 4 or 5 largely because they stick to technical matters and, rarely talking about philosophy or politics, say nothing for Boycott Novell to dissect for suspect opinions.

Strangely, the Boycott Novell cadre didn’t rate their own reliability, although whether that is because they are assumed to be the only ones who rate a perfect six or because the ranking doesn’t include negative numbers, I leave as an exercise to the readers.

From the link attached to my name, my own ranking seems based on the fact that I accepted that a comment signed with a Boycott Novell writer’s name really was by him; when he said it wasn’t, I accepted the claim and he suggested that I was owed “some apologies.” Yet, apparently I’m permanently branded as being only marginally trustworthy because of this minor incident. I suspect, though, that the writer’s belief that I lumped the Boycott Novell writers into the category of conspiracy theorists has more to do with my ranking than anything else.

But these foibles don’t disturb me unduly. Far from being upset, I’m glad of the list, because it gives me a goal. If I write consistently hard-hitting articles in which I dig carefully for facts, build a flawless chain of reasoning, and tell the truth no matter how uncomfortable the consequences, then maybe – just maybe – in a few years Boycott Novell will reward me with the ultimate accolade of a zero ranking. Then I’ll know that I have truly arrived.

And that is all that I intend to say on this subject. Ever.


(Note: I wrote the following article six years ago for the Techwr-l site. For a long while, it was my most-requested article. Recently, I noticed that a site makeover had taken the article down. By now, it may be posted again, but I thought I’d reprint it here, although it’s very long by the standards of blog postings. Although addressed to technical writers, almost everything in it applies to writers and writing in general.)

Most technical writers are confused about grammar. On any day on the TECHWR-L list, basic questions are asked: “Is ‘User’s Guide’ or ‘Users’ Guide’ correct? Maybe ‘Users Guide?'” “Should ‘web’ be capitalized when used to refer to the World Wide Web?” “Which is right: ‘A FAQ’ or ‘an FAQ?'” Many of these questions become the major thread on the list for a day or two, generating far more debate than they’re worth.

The confusion isn’t so much about the grammatical points themselves. It’s about the nature of grammar in general. Apparently, many tech writers do not see grammar as a set of conventions to help them write clearly. Instead, to judge by the wording of the questions and responses, they see grammar as a set of unchanging rules that can provide definitive answers in every situation.

Some are afraid to break the rules of grammar and risk being denounced as incompetent. A handful, smugly sure that they know the rules, use their rote learning of the rules as an ad hominem attack, nitpicking at typos and small errors to discredit writers without disproving their viewpoints. Most sit in the middle, haunted by the ghosts of childhood grammar classes until they can hardly tell on their own authority whether they are writing well or not. But underlying all these reactions is an attitude that rules are rules, and cannot be broken.

This attitude is usually known as a prescriptive approach to grammar. It assumes that grammar exists mainly to tell us how to speak or write properly–not well. It is an attitude that tech writers share with almost everybody in the English-speaking world. It is a form of conditioning that begins in kindergarten and continues through high school and even into college and university. It undermines nearly everyone’s confidence in their ability to communicate, especially on paper. Yet it is especially harmful to professional writers for at least three reasons:

  • It grotesquely exaggerates the importance of grammar. Although competence in grammar is sometimes proof of other writing skills, it stresses presentation over content. Even worse, it stresses correctness over precision, conciseness, or clarity.
  • It binds writers to viewpoints that are not only arbitrary and obsolete, but, in some cases, far from their own opinions.
  • It undermines writers’ confidence and their ability to make decisions about how to communicate effectively.

Why are we burdened by this attitude? How does it affect us? The easiest way to answer these questions is to look at the origins of the prescriptive attitude and the alternatives to it. Only then can we begin to grasp how we can live without it.

The Rise of Prescriptive Grammar

Prescriptive grammars are the products of the Enlightenment. Earlier grammars such as William Bullokar’s in 1586 and Ben Jonson’s posthumous one were also prescriptive, but intended for language students. The first prescriptive pronouncements for native English speakers date to the Seventeenth and Eighteenth Centuries.

This is the start of the great era of describing and recording. In every subject from biology to Egyptology, educated men struggled to write accounts so thorough that no other one would ever be needed. It was also a time that looked both backwards and forwards to Classical Rome. It looked back in the sense that ancient Rome was seen as the height of civilization. It looked forward in two senses: England perceived itself as a second Rome, and Latin was the language of international science.

The first prescriptive comments were very much in the spirit of their times. Their compilers hoped for definitive grammars that would fix the form of English once and for all, and provide a source for settling grammatical disputes. Since Latin was the language of the world’s most sophisticated civilization, the closer this fixed form of English was to Latin, the more sophisticated English would be. Moreover, given the belief that culture had been degenerating since the fall of Rome, most grammarians automatically equated any change in the language with decay and degeneration.

John Dryden, the poet and playwright, was among the first to make prescriptive pronouncements. His main contribution to prescriptive grammar was to suggest that prepositions should never end a sentence. The reasons for this prescription are that Latin sentences rarely end in prepositions, and that the word “preposition” clearly indicates that this part of speech should go before (“pre”) the noun it is associated with. Dryden criticized Shakespeare and Jonson for not following this rule, and scrupulously edited his own writings until they conformed to it.

Dryden also hoped to fix the form of English, which was still rapidly changing. Together with the diarist John Evelyn and other authors, Dryden called for an English version of l’Académie française–the body of scholars and writers that oversees the creation of the official French dictionary and is the arbiter of correct French. Dryden’s plea for an English Academy was echoed later by Daniel Defoe and Jonathan Swift. The idea was being seriously considered by Queen Anne when she died in 1714. But with the ascension of the German-speaking George I, the question shifted from the monarch helping to purify English to teaching the monarch English, and the idea was dropped.

Deprived of royal assistance, English men of letters did their best to improve the language on their own. In the mid-Eighteenth Century, a number of grammars were published, as well as Samuel Johnson’s famous dictionary, whose Preface spells out its prescriptive purposes with both succinctness and dry wit. All these works were heavily prescriptive, although Joseph Priestley did include some comments about the importance of common usage in deciding what was proper.

The most influential of these grammars was Robert Lowth’s “Short Introduction to English Grammar,” published in 1761. Criticizing almost every major English writer from Shakespeare to Pope, Lowth made most of the prescriptive statements that people still follow today. His prescriptions include:

  • Two negatives make a positive, except in constructions such as “No, not even if you paid me.”
  • Never split an infinitive.
  • Never end a sentence in a preposition.
  • “Ain’t” is unacceptable in formal English.

These ideas have several sources: An attempt to model English on Latin (and therefore to arrive at a universal grammar that underlay all languages), a desire to be scientific, and Lowth’s personal preferences. For instance, Lowth would not accept split infinitives because Latin infinitives are one word and cannot be split. Similarly, two negatives make a positive because they do so in mathematics. The ban on “ain’t,” though, seems entirely Lowth’s idiosyncrasy. None of these prescriptions, however, took any notice of how people actually spoke or wrote. For example, despite Lowth, “ain’t” continued to be used by Queen Victoria and the upper classes until the start of the Twentieth Century.

Although Lowth later became Bishop of London, his ideas on proper usage would probably have remained obscure if they had not been borrowed for classroom use. In 1781, Charles Coote wrote a textbook grammar based on Lowth’s Grammar, adding his own preference for “he” and “his” as the indefinite personal pronoun (as in “everyone is entitled to his opinion”). A few years later, Lindley Murray borrowed from Lowth to write a series of textbooks for a girls’ school. Murray’s textbooks became so popular that they quickly became the standard English texts in American schools throughout much of the Nineteenth Century. With modifications, Murray’s “English Grammar” and Lowth’s Grammar have been the basis for textbook grammars ever since. With their unyielding rules, these textbooks have given at least eight generations of English-speakers the prescriptive attitude that inhibits people today.

The Biases of Prescription

In their language and their purposes, prescriptive grammars make a strong claim to objectivity. A. Lane was typical of the first grammarians when he wrote in 1700 that the purpose of grammar was to teach people to speak and write “according to the unalterable Rules of right Reason.” Similarly, in writing his dictionary, Samuel Johnson humorously referred to himself as a “slave of Science.” These claims are still echoed when modern defenders of the prescriptive attitude assume that the rules of grammar are value-free. Yet a closer look at prescriptive grammar reveals distinct biases. Some of these were openly admitted by the first grammarians. The problem is that some of these biases are no longer current. Others are demonstrably false.

The early grammarians’ claims to be scientific or precise concealed a strong personal bias. This bias was by no means a conspiracy–it was simply natural self-expression. Grammarians were highly educated, or the subject would hardly come to their attention at all. Since education was a privilege, they were either well-to-do or extremely talented. Since the higher levels of education were barred to women, they were male. And, as might be expected from the subject, they were all intensely literate.

While this background made the first grammarians supremely qualified for literary and scholarly work, it also carried a certain arrogance. The first grammarians showed scant interest in how English was used outside their own class and social circles. All of them assumed an unwarranted authority in their subject, appointing themselves as the arbiters of the language without any justification except their willingness to serve. Robert Lowth, for example, did not hesitate to include his own idiosyncrasies as grammatical rules. In much the same way, Samuel Johnson saw it as his “duty” to purify English usage through his dictionary. This same attitude continues in the prescriptive attitude today.

Moreover, the first grammars were a direct reflection of their authors’ bias and education. The problem is, the education of the Restoration and Enlightenment included assumptions that we would question today. For example:

  • Rome is the model for all things: In fact, Latin is a poor model for English. Although both languages are Indo-European, the relation is indirect. Even the heavy influence of French, a Latin-derived language, via the Norman Conquest, does not make English’s structure much closer to Latin. At its core, English is a Germanic language, and if the first grammarians had used Dutch, Swedish, or German as a model, they would have had no precedent for objecting to split infinitives or double negatives. However, the Germanic origins of English were poorly understood in Britain during the Seventeenth and Eighteenth Centuries. At any rate, the first grammarians would probably have considered Germanic models too crude to replace the polished perfection of Latin.
  • Writing is the basis for English grammar: Like other grammarians, Samuel Johnson assumes that the principles of grammar should be taken from the written form of the language. Since Johnson was a writer himself, this assumption is understandable. However, modern linguistics regards written usage as simply one of many types of English, none of which is more valid in the abstract than any of the others. If anything, the spoken language is usually given greater priority today, partly because it tends to be the source of innovation, and partly because it reveals how people use the language when they are not trying to write correctly or formally.
  • Change is degeneration: When the narrator of “Gulliver’s Travels” visits Laputa, he is shown a vision of the Senate in Ancient Rome, then the modern English Parliament. The Senators look like demigods and heroes, the Members of Parliament scoundrels and ruffians. In writing this passage, Jonathan Swift reflects a widespread belief in his time that times are getting continually worse. This fallacy is the exact opposite of the modern one of equating all change with progress.

Applied to the English language, Swift’s view has little basis in fact. Admittedly, English has simplified itself over the centuries by dropping most noun declensions and verb conjugations, but that has not made it less useful for communication. Nor, despite the delicate shudder of prescriptive grammarians, does the shift in meaning of “cute” from “clever” to “attractive” in the early Twentieth Century or of “gay” from “happy” to “homosexual” in mid-Century weaken a language with as many synonyms as English. When clumsy or unclear constructions do arise (such as the use of “not” at the end of a sentence), their impracticality generally ensures that they are brief fads.

At any rate, what is degenerate and what is progressive is often a matter of opinion. To J.R.R. Tolkien, the Anglo-Saxon scholar and author of The Lord of the Rings, the hundreds of words added by Shakespeare and his contemporaries are a corruption that ruined English forever. Yet to most scholars, these coinages are an expression of a fertile inventiveness and part of the greatest literary era ever known.

  • London English is standard English: Like most English writers, the grammarians and their publishers centered around London. The language of the upper classes in the Home Counties had already become Standard English by the Fifteenth Century, which is why many people have heard of Chaucer and few people have heard of (much less read) “Sir Gawain and the Green Knight,” a brilliant poem by one of Chaucer’s contemporaries, written in an obscure North Country dialect. That is also why Chaucer and Shakespeare poke fun at other dialects–they already had the idea that some forms of English were better than others. In basing their work on the English spoken around London, the grammarians were simply working in a long-established tradition.

Far from seeing their biases as contradicting their claims to scientific objectivity, the grammarians openly proclaimed their goal of saving English from itself. In fact, by the standards of the time, proclaiming their educational and cultural assumptions was a means of asserting their ability to be objective on the matter.

Of all the early grammarians, the one most keenly aware that prescriptive grammars were biased was Noah Webster, the writer of the first American dictionary. Working on his dictionary between 1801 and 1828, Webster was not content simply to record how words were used. Instead, he was also concerned with producing a distinctly American language. To this end, Webster not only included words such as “skunk” and “squash” in his dictionary, but also introduced American spellings, such as “center” instead of “centre.” In addition, he encouraged the spread of a uniquely American pronunciation by consistently placing the stress on the first syllable of the word. In other words, Webster attempted to deliberately manipulate the use of English in the United States for patriotic reasons. Whatever anyone thinks of those reasons, Webster’s efforts are one of the best proofs that prescriptive grammars are not as value-free as many people imagine.

The same is true today in the debate over whether “they” can be used as the indefinite pronoun instead of “he/his” (“Everyone is entitled to their opinion”). On the one hand, traditionalists who insist on “he/his” are perpetuating the male bias of the first grammarians. On the other hand, reformers who favor “they” are trying to remake the language in their own world view. It is not a question of objectivity on either side. It is simply a question of which world view will prevail.

The Problem of Change

But the major problem with the prescriptive attitude is that it resists the fact that languages are continually changing. If a community newspaper constantly includes editorials urging people to drink less, the amount of concern suggests a local drinking problem. In the same way, the constantly expressed wish to set standards for the language reflects the massive changes in English at the time that the first grammarians worked. Although the rate of change was probably slower than that of the Fifteenth and Sixteenth Centuries, English was still changing much faster between 1650 and 1800 than it does today.

Some of the changes that occurred or were completed in this period include:

  • The disappearance of dozens of Old English words like “bairn,” “kirk,” and “gang” (to go). Many of these words survived in Northern and Scottish dialects for another century, but became non-standard in written English.
  • The addition of dozens of new words. Some were deliberately coined by scientists, such as “atom.” Some were borrowed from the regions that England was conquering, such as “moccasin” or “thug.”
  • The replacement of “thou” and “ye” with “you” in the second person. These forms survived only in poetry.
  • The standard plural became “s” or “es.” Only a few exceptions such as “oxen” survived.
  • The loss of all personal endings except the third person singular in most verbs (“I read,” “he reads”). Some of the older forms ending in “th” survived longer in poetry.
  • The loss of inflection in most adjectives.
  • The regularization of past tenses to “ed” or, occasionally, “t” (“dreamed” or “dreamt”).

Many of these changes are easy to overlook today because popular editions of texts from this period routinely modernize the spelling. However, the sheer number of changes makes clear that the early grammarians were fighting a rear guard action. If they have helped to slow the rate of change in the last two centuries, sometimes they have also accelerated it; the loss of many Northern words, for example, is probably partly due to the standardization on Home Counties English. Yet, despite these efforts, English continues to change as the need arises. Many of these changes come from the least educated parts of society–those least likely to be influenced by the prescriptive attitude.

Today, all prescriptive grammarians can do is resist changes as long as possible before accepting them. This constant retreat means that most prescriptive grammars are usually a couple of decades behind the way that the language is actually used in speech and contemporary publications.

The Descriptive Alternative

While prescriptive grammars were finding their way into the schools, an alternative approach to the study of language was being developed by linguists. Imitating the naturalists of the Eighteenth Century, linguists began to observe the pronunciation, vocabulary, grammars, and variations of languages, and began cataloging them in ways that suggested how they related to each other. In 1786, Sir William Jones established that most of the languages of India and Europe were related to each other. By 1848, Jacob Grimm, one of the famous Brothers Grimm of fairy tale fame, had detailed in his History of the German Language how English, Dutch, German, and the Scandinavian languages had descended from languages like Gothic. Content to observe and speculate, the early linguists developed what is now called the descriptive approach to grammar.

The descriptive approach avoids most of the distractions of prescriptive grammars. Today, most linguists would probably accept the following statements:

  • Change is a given. In fact, a working definition for linguistics is the study of how languages change. Linguists can offer a snapshot of how a language is used, but that snapshot is valid for only a particular place and time.
  • No value judgement should be placed on changes to a language. They happen, regardless of whether anyone approves of them or not.
  • The fact that one language is descended from a second language does not make the first language inferior. Nor does it mean that the first language must be modelled on the second.
  • Linguistics does not claim any special authority, beyond that of accurate observation or verifiable theory. Claims are open to discussion and require validation before being accepted.
  • No form of a language is given special status over another. Regardless of who speaks a variation of a language, where it is spoken, or whether it is oral or written, all variations are simply topics to be observed. For example, when Alan Ross and Nancy Mitford coined the phrases “U” and “non-U” for the differences between upper and middle class vocabularies in Britain, they did not mean to suggest that one should be preferred (although others have suggested that deliberately using a U vocabulary might be a way to be promoted).
  • Variations of a language may be more or less suitable in different contexts, but none are right or wrong. The fact that you might speak more formally in a job interview than at a night club does not mean the language of a job interview is proper English, or that the language of the night club is not.
  • Proper usage is defined by whatever the users of the language generally accept as normal.
  • A language is not neutral. It reflects the concerns and values of its speakers. For example, the fact that every few years North American teenagers develop new synonyms for drinking and sex reflects teenagers’ immense preoccupation with the subjects. The fact that they also develop new synonyms for “slut” reflects their sexual morality as applied to girls. A similar viewpoint is known in psychology as the Sapir-Whorf hypothesis.

To those conditioned by prescriptive grammars, many of these statements are unsettling. For instance, when I suggested on the TECHWR-L list that proper usage was determined by common usage, one list member went so far as to call the view “libertarian.”

Actually, these statements are simply realistic. Despite two centuries of classroom conditioning, average users of English have never been overly concerned about the pronouncements of prescriptive grammar. Instead, people continue to use English in whatever ways are most convenient. If the prescriptive usage is not widely used in practice, it may even sound odd, even to educated people. For example, to an ear attuned to the spoken language, the lack of contractions in an academic essay or business plan may sound stilted. In fact, an entire written vocabulary exists that is almost never used in speaking. The descriptive approach simply acknowledges what has always been the case. In doing so, it frees users from the contortions of prescriptive grammar, allowing them to focus on communication–where their focus should have been all along.

Professional Writers and Grammar

Writing well, as George Orwell observes in “Politics and the English Language,” “has nothing to do with correct grammar and syntax.” If it did, then two centuries of prescriptive grammar in the classroom should have resulted in higher standards of writing. Yet there is no evidence that the language is used more skillfully in 2001 than in 1750. The truth is that prescriptive grammar and effective use of English have almost no connection. A passage can meet the highest prescriptive standards and still convey little if its thoughts are not clearly expressed or organized. Conversely, a passage can have several grammatical mistakes per line and still be comprehensible and informative. Prescriptive grammars are interesting as a first attempt to approach the subject of language, but today they are as useless to writers as they are to linguists. So long as writers have a basic competence in English, prescriptive grammar is largely a distraction that keeps them from focusing on the needs of their work.

By abandoning prescriptive grammar, writers shift the responsibility for their work to themselves. In practice, this shift means making choices that are not right or wrong in the abstract, but, rather, useful in a particular context or purpose. For example, instead of agonizing over whether “User’s Guide” or “Users Guide” is correct, writers can choose whichever suits the situation. They can even flip a coin, if they have no better means of deciding. In such cases, which choice is made is less important than using it consistently throughout the document to avoid confusion. Even then, writers may decide to be inconsistent if they have a good reason for being so.

Similarly, the decision whether to use a particular word or phrase is no longer a matter of referring to a standard dictionary or somebody else’s style guide. Instead, writers have to fall back on the basics: Will the intended audience understand the word? Is it the most exact word for the circumstances? Does it convey the image that the company wants to present? In the same way, while the need for clarity and a factual tone makes complete sentences and unemotional words useful choices in typical manuals, in a product brochure, sentence fragments and words heavy with connotation are more common. Writers may still want to summarize their decisions in a corporate style guide, but the style guide will be based on their own considerations, not the rules that someone else tells them to follow.

None of which is revolutionary–except that, under the prescriptive attitude, irrelevant purposes are often inflated until they become more important than a writer’s practical concerns.

That is not to say that taking a descriptive approach to grammar means writing in the latest slang. Nothing could date a document faster, or be more intrusive in a technical manual. Nor does it mean abandoning technical vocabularies that are known to the audience or that make explanations easier. If anything, a descriptive approach demands a much greater awareness of the language than a prescriptive one. Instead of learning the single correct version of the language, writers who take a descriptive approach need to be aware–probably through constant reading–not only of dozens of different versions, but of how each version is changing.

If necessary, writers can use descriptive grammars such as journalistic style guides to help them. On the whole, however, the descriptive approach leaves writers where they should have been all along, deciding for themselves what helps their documents to achieve their purposes. The only difference is that, under the descriptive approach, they are fully aware of their situation. If they say anything unclear or stupid, they can no longer hide behind tradition.

Prescriptive grammar is useful for teaching English as a second language, but it has little value for the practicing writer. Clinging to it may provide emotional security, but only at the expense of making writing harder than it needs to be. The culture-wide devotion to it will not be changed in a moment. But conscientious writers can at least change their own habits, and make life easier for themselves. And, from time to time, they can even laugh some worn-out, crippling concept — such as not ending a sentence in a preposition, or not splitting an infinitive — into the recycle bin where it belongs.

(With apologies to George Orwell.)


Since I’m a Canadian, Memorial Day doesn’t mean much to me. Our May long weekend is Victoria Day, which often falls on the weekend before. From the times I’ve been travelling in the United States on the Memorial Day long weekend, it seems to involve a lot of parade drill from everyone from octogenarians to people in wheelchairs – and, as a sport, parade drill is low on the list for breakneck action and usually looks faintly ridiculous to my outsider’s eye. But I do remember one unforgettable Memorial Day, when we visited the fantasist Avram Davidson at the Veteran’s Hospital in Bremerton, Washington.

As you may know if you have any taste for literary fantasy, Avram was one of the great fantasists and humorists of the 20th Century. But, as sometimes happens with great writers, he was not very skilled at taking care of himself. Through poverty, he had developed the habit of living in small towns where rents were cheaper, and, according to him, moving on when he had exhausted the local library.

In the last couple of years of his life, this habit had brought him to Bremerton. When his long-neglected health began to fail, he landed in the local veterans’ hospital, thanks to his service in World War II as a hospital corpsman with the Marine Corps in the Pacific. There, from the narrow confines of his room, he fought a running battle with the bureaucrats of the hospital and of Veterans Affairs, none of whom were used to dealing with patients who were not only highly intelligent but who had a strong streak of the curmudgeon and the anarchist in their mental makeup.

Perhaps it was a campaign in this ongoing battle that prompted Avram to invite everyone he knew within a day’s travel of the hospital to its Memorial Day celebration, just to annoy his opponents. Or maybe Avram’s famous generosity, so long denied by his poverty, seized on the celebration as an overdue way to treat his friends and repay them for their visits. He could also have been restless within the limitations of his life, and worried that he might not have long to live.

Knowing Avram, I suspect the invitation was extended for all these reasons. But, whatever his motivations, the invitation went out, and we drove down from British Columbia that morning with all the excitement with which the inhabitants of Hobbiton must have tramped over to the party field to celebrate Bilbo’s birthday.

The trip was memorable as the only one we ever took south of the border on which the American customs guard did not interrogate us on the strength of our rustbucket Maverick. In those days (and possibly still, for all I know), customs jobs were veteran-preferred postings. The second the guard heard that we were visiting the veterans’ hospital, he smiled and waved us through without another question.

When we got there, we found that the hospital had laid out dozens of tables on the lawn. The celebration was in full swing, but we had no difficulty finding Avram. He was sitting as far away from the bandstand as possible, surrounded by a dozen people, holding court in his wheelchair and telling stories about recent and past events.

At this late date, I don’t remember everything everyone said. But I do remember that, when someone noted that a tavern sat just beyond the hospital grounds, Avram said that many of the patients would go to any lengths to get to the hard liquor served there. When the hospital tried to discourage the custom by planting a pole in the gap in the fence, so that wheelchairs couldn’t squeeze through, wheelchair patients would drag themselves along the fence, inch by painful inch, to get to the tavern. On Friday nights, he said, they looked like insects spread across a windshield as they clung there.

At some point, too, another guest took out a letter he had been asked to forward to Avram six months before. To my surprise, it was from me; I had completely forgotten the incident.

The stories and jokes went on, many told by Avram, but with the others contributing their share as well. A band arrived and played the usual American patriotic songs. We continued talking, oblivious to the occasional glares from other visitors. We lined up for food, and the hospital staff glared at our numbers and said nothing. The celebrations ended and the staff started to clean up, until only our table was left standing, and still we talked. We didn’t care. I don’t know how Avram’s other visitors felt, but I felt as though I had stumbled into a London coffee house on an evening when Samuel Johnson was holding forth, and I didn’t want the evening to end. If we hadn’t had a ferry to catch and a two-hour drive on the other side, we might have stayed until midnight.

As it happened, that was the last I saw of Avram. He was dead less than a year later, having left the veterans’ hospital for a basement suite in town just before the bureaucrats could throw him out. I understand that he was not found until a couple of days after he died, and I don’t like to think about his final moments alone.

Instead, I prefer to think of him as I last saw him when I looked back across the grass. He looked tired, but he was obviously in his element, telling stories and laughing at what other people said – a master storyteller even in his leisure.

Read Full Post »

In the past, I’ve described bloggers as amateur journalists. Those who are good enough and ambitious enough eventually find paying gigs and become professionals. Broadly speaking, that’s still true, but I now think the description is incomplete. Where a professional journalist is constrained by a code of ethics when writing reviews, bloggers need only follow their consciences. And, for some, their consciences are not enough.

As a professional journalist, I am required by my editors to follow a well-recognized set of guidelines in dealing with my subject matter. If I write about an organization to which I have connections, I’m supposed to disclose that connection, if only at the end of the story. If I receive a piece of proprietary software (not that I ever get much, since I cover free and open source software), I either return it or throw it away when I’m finished with the review. Similarly, if I receive hardware (again, I don’t get much; thanks to the vagaries of the tariffs imposed by Canada Customs, few companies are willing to ship from the United States to Canada), I return it to the sender when I’m done.

This basic code of ethics isn’t always comfortable. It means, among other things, that I don’t take out a membership in the Free Software Foundation, even though I support the organization’s goals, because I might be tempted to pull my punches should the time ever come when I need to criticize it freely. But I try to follow the code because part of what I sell is a truthful voice. Unless I make an effort to keep that voice, what I write is useless.

The editors I sell to regularly probably wouldn’t fire me if I knowingly lapsed from these standards. But they would reprimand me the first time, and they would probably stop buying my work if the lapses continued. They have their own credibility to consider, and buying tainted work doesn’t enhance it. And, at the risk of sounding priggish, I accept these standards as natural and, if not ideal, then at least the best that can be followed to retain integrity.

Imagine my shocked innocence, then, when I discovered that some bloggers do not consider themselves similarly restrained (I won’t name them; I have no wish to pick a fight, and the names matter less than the behavior). At least one well-known blogger openly advertises on his front page how much he charges to blog about a product. Another blogger accepted samples of moderately priced merchandise in exchange for writing about it. Then, when the advertising agency that had connected them with the manufacturers changed the rules on them but continued to invite them to participate in such campaigns, they were conscience-free enough to complain of maltreatment and spamming. Others also complained about spamming by the same advertiser, yet wished that they had qualified to take part in such a campaign.

To say the least, these people live in a very different ethical universe from mine and, by extension, from that of other professional journalists. And, much as I hate to say it (since they all seem decent enough people when I’ve met them socially), their definitions of acceptable behavior make everything they write unreliable. Unless they announce that they’ve changed their ways, how can I know that what they write is an honest opinion, and not a bought one? Even if they’re writing on an innocuous subject, I’ll always wonder whether their opinions are tainted.

Am I being too rigid here? Nobody else seems to be bothered by such behavior, so why should I be? Maybe my self-mocking description of myself as a modern Puritan has more truth than I realized.

All the same, I keep thinking of the comedian Bill Hicks’ comment about people who do product endorsements: “Do a commercial, and you’re off the artistic roll call. Every word you say is suspect, you’re a corporate whore. End of story.”

Read Full Post »
