
Posts Tagged ‘Personal’

Ever since I learned to read, I’ve been a chain reader, sometimes literally finishing one book and picking up another. Books have been my refuge from the bleakness and bad news of the day, a way to while away time in line at the store, and my constant companion on transit and planes. I even shave while reading to alleviate the boredom of the task (obviously, I use a safety razor). And, inevitably, I re-read.

The first time I read a book, I may have many motivations. Obviously, I need to have an interest in the topic or the writer, but I’m not a very discriminating reader, so that hardly narrows down why I might read something – everything from graphic novels to Middle English poetry might seem interesting to me in different moods. At times, I read because the writer has a reputation, and I want to push back the boundaries of my ignorance a furlong or two. At other times, I read because I’ve been given a book (I count heavily on friends to urge on me books that I might not pick for myself, and, often enough, I find myself pleasantly surprised). Still other times, I read because nothing better is at hand.

However, why I re-read is easier to delineate. I rarely re-read non-fiction from cover to cover, although I might return to particular pages when researching or needing to prod my memory. Mostly, what I re-read is fiction. If I were trying to be a snob, I would claim that I re-read only worthwhile books, but that would be a half-truth. Unless my tastes change, I doubt I’ll re-read standards of the literary canon like Henry James or Anthony Trollope; I recognize that their writing shows some skill, but, like opera, it’s a skill I recognize without appreciating.

It would be more exact to say that I re-read fiction whose skill has impressed me with its craft, regardless of how the canon regards it: Charles Dickens, but also Wilkie Collins; John Fowles and Lawrence Durrell, but also any number of writers who labored their lives away in the science fiction ghetto.

What others think of my taste makes little difference to me (although I confess I can’t quite bring myself to read graphic novels on the bus). Instead, what matters is that the work shows some skill. The over-maligned Stephen King, for instance, is a master at pacing and observation of Americana – two skills that are usually missing from the academic’s checklist for greatness, but which average readers reward unconsciously by purchasing his work.

However, the books I re-read the most are those that not only give aesthetic pleasure, but also reinforce my world view. Three books (or series) in particular come to mind: T. H. White’s The Once and Future King, which shaped my sense of right and wrong; J. R. R. Tolkien’s The Lord of the Rings, from which I learned the core values of endurance and rising to the occasion; and Patrick O’Brian’s Jack Aubrey and Stephen Maturin novels, which idealize friendship and a detached but amused view of the world while also offering historical adventure in the 19th century British navy.

Probably, you could gauge my character very accurately, not only from the common nature of these books – none, notably, have a modern or mundane setting – but also from the number of times I’ve re-read them. White I’ve re-read at least a dozen times since childhood, and Tolkien – the last time I checked – over 33 times, a number that astonishes me as I write it. By contrast, I have only read the twenty or so novels in O’Brian’s series three times through, but, then, I came to them much later than the other two, and they probably amount to two or three times the words of White’s and Tolkien’s classics. I’m re-reading O’Brian now, savoring favorite lines (“Jack, you have debauched my sloth”) and finding new subtleties.

The chances are, I’ll re-read all three – to say nothing of other favorites – many times in the rest of my life. However, I doubt I’ll re-read any of them as many times again as I already have. As I grow older, I am more jealous of time, and more aware of all that I have still to read. In fact, probably a new book has to impress me more than my classics did before I’ll re-read it in preference to moving on to something new. But a change of heart or a prolonged illness might change that, and, even if they don’t, I still expect many hours of pleasure ahead with my old favorites.


“Let us now compare mythologies”
-Leonard Cohen

One advantage of blogging, I find, is that it reveals my personal mythology. A single entry might not do so, but if I look over a few dozen entries (something I rarely do, because the urge to edit and improve is almost irresistible), a definite pattern starts to emerge.

When I talk about mythology, I’m not talking about lies. Rather, I’m talking about myths in the anthropological sense – stories that explain where you come from and why you do things in certain ways. In this sense, whether a myth is true or a lie is only of secondary importance. What matters is whether the myth sustains you and gives a sense of identity.

For instance, to an American, does it really matter whether America was settled by the best and brightest from older countries? Or, to a feminist, whether a prehistoric universal matriarchy ever existed? You can examine and even debunk such stories – and there can be a certain satisfaction in disproving what everybody knows – but you won’t be thanked, and your proofs will not be welcomed (if anything, you’ll be pilloried). What matters is not the objective reality of myth, but the sense of identity it gives a culture or an individual. If a story helps to sustain identity, that is all that really counts.

So what is my personal myth? Looking through blog entries about my past, I’d say it could be summarized in five words: triumphing after a bad start. Or, in a single word: endurance.

Time and time again, the narrative I tell about myself begins with me doing something badly. Often, I am humiliated by how inept I am. But I am determined, and through perseverance, I make myself competent and even highly skilled where I was once inept.

Considering this story more closely, I find that it has all sorts of implications. For one thing, it’s not a story tied to a particular group or set of circumstances; instead, it’s about attitudes and applicable to a number of situations. Since I’ve always considered myself a generalist with a broad array of interests, I’m fascinated to find that view reflected in my personal myth.

For another, it’s about education – again, not surprising considering that I’ve always believed in education for its own sake, and research is what I currently do for a living.

But what I find most interesting is that my myth emphasizes persistence. It makes no claim of brilliance or talent. Natural ability isn’t even a consideration. Instead, it’s about learning from mistakes and not giving up. Learning to speak, learning good handwriting, becoming a high school running champion, finding the right profession – time and time again, the story I tell myself is about plodding along until I do or find the right thing.

That’s not surprising, I suppose. The one story I knew about someone with my name when I was growing up was the one about Robert the Bruce learning persistence from a spider. And, as a distance runner, I had concrete knowledge of the importance of endurance, because it wasn’t speed or even strategy that won races so much as the ability to keep going. But, until now, I hadn’t realized how deep-rooted such values were inside me.

In fact, I’m not sure that this is a myth I would have consciously chosen for myself. It has limits, such as a distrust of anything that comes too easily. Perhaps, too, it suggests a lack of confidence, and an expectation of failure the first time. It certainly dropped me into the worst stress that I have ever endured in my life.

Nor, now that I have opened up the myth to examine it, am I completely sure that it is always true. I can think of exceptions to the myth, and, looking back, I think I can see places where I have tugged the raw material of my life to make it fit into the myth better, like the corner of a sheet on a bed. In other places, I suspect I’ve exaggerated or even made up things out of whole cloth.

Still, for better or worse, the myth is mine. And like all myths, what matters in the end is that, on some level, I’ve made it a part of me.


Today, I suddenly realized that I was enjoying swimming – enjoying it immensely. The reaction comes as a surprise for several reasons.

To start with, I learned to swim under what I remember as the most miserable conditions when I was a child. In my mind, all the swimming lessons I took in the local outdoor pool occurred in the pouring rain and freezing cold, when all I really wanted to do was stay huddled in my towel in the cabana where the class met.

To make matters worse, I was a poor learner. Or so I thought, because I took forever to struggle up the hierarchy of lessons. It was only in my last year of lessons that I had an instructor who was built like me, with a long torso and short calves, and that I realized that much of what I was learning was useless for anyone of our build. The instructor taught me some alternate kicks that actually worked, so I could tread water for the first time in my life.

Yet, even then, I didn’t care much for the crawl, which was the dominant stroke in those days. I found the swift glimpses above the water disorienting, and I didn’t care much for the sensory deprivation of swimming in general. For years, my main technique was a modified breast stroke that kept my head above water.

Then, just to make me even less inclined to enjoyment, I started swimming regularly a few years ago when I realized that I needed a more varied exercise regimen if I hoped to spare my much-battered knees further wear. After years of long-distance running, swimming was definitely second best, and something I endured more than I enjoyed.

Several things have made me change my mind, though. For one thing, after swimming daily since the Victoria Day weekend, I’ve reached the point where I fall into a rhythm while doing my laps, and don’t have to think about what I’m doing. It’s only at this point, I’ve learned from other exercises, that working out stops being a grim duty. However, I’ve reached that stage every summer for the past few years without more than mildly enjoying my swimming on most days.

But, over the past couple of weeks, the weather has turned hot suddenly, without any gradual build-up that would let me get used to it. Walking from an air-conditioned building to the outside, I can feel the heat wrinkling away from me as though it’s a skin that I’m shedding, and, after a run or a session on the exercise bike, my singlet is a sweaty mess that disgusts even me. Under these conditions, the coolness of the pool is luxurious. When I duck my head completely under, a delicious ring of coolness seems to encircle my forehead and temples.

Most importantly, this year I’ve been under considerable stress for several months. While most of the time sensory deprivation seems hellish to me, this year, as I cope with stress, it’s relaxing. In fact, it’s so relaxing that I’ve dropped my modified breast-stroke for the proper thing, dipping my head into the water and coming up for air. Propelling myself face down along the pool, I can see reflections from the sun, like a shimmering chain link fence of gold along the bottom, and not much else. Now, it’s a glorious sensation, being cut off from much of my usual sensory input while feeling my legs and arms moving in rhythm.

I’ve got to the point now where I can swim two kilometers, and, although my muscles know they’ve had a workout, I feel like I could easily do as much again. I especially like the solitary feeling, because the gym where I ride the exercise bike is usually so full of inconsequential chatter and posturing.

What I will do when the pool in my townhouse complex closes in the fall, I don’t know. But I’ll want to make some effort to find another convenient pool for the winter months.


At the start or end of my morning run, I often meet one of my neighbors running for the bus. He works as an on-call English instructor at various institutions around the city, and often gets the call to teach at the last moment in the morning. He doesn’t seem to mind, except for the irregularity of his pay, but I never meet him on his way to work without being thoroughly thankful that my own days as an itinerant instructor are long past.

For the last few decades, most people with a graduate degree who hope to have a career in academia spend at least some time as a sessional instructor, scrambling for each semester-long contract, and often scrambling between community colleges during the work week to cobble together something like a regular pay cheque. My own experiences include a semester when I bused out to Fraser Valley College (as it was then) two days a week, and more than one time when I had fourteen hour days in which at least two or three hours were spent travelling. That’s time that I could have dearly used for marking or lesson preparation.

Sessional instructors have the lowest rank in academia, and everything about their working conditions reminds them of that fact. Often, they don’t get a teaching assignment until a week or less before the semester starts – sometimes the night before. They get paid half what tenured faculty get, and often do twice the work, since they frequently teach lower level classes with more students. Most of the time, they have to share offices – or even study carrels in a crowded room. Officially, they don’t get paid for research, yet, if they don’t publish, they have less chance of being hired. Similarly, they are looked down on because their focus is teaching, but, unlike a tenured professor, they lose their position if their student evaluations are poor. Their rehiring is at the whim of their department, which means that wise sessionals will waste hours at every meeting and function, even though they have no voice. Yet they endure all this in the hopes that one day they’ll rise to the height of being a lecturer – which means they’ll be doing the same work for about the same pay, but not having to scramble for it. Meanwhile, they dream of winning a tenure position and dwelling in the halls of academia forever.

Sessional work is especially hard at the community colleges. For one thing, the classes are larger than at universities, and more assignments are required. For another, community colleges – even now, when they have morphed into degree-granting institutions of a kind – are often the continuation of high school under another name. Faced with the choice of finding a job or going to college, many middle-class kids will immediately register for college, which is cheaper than university and easier to treat lightly. For the sessional instructor, that means that the lesson that works in the more serious atmosphere of university has to be largely remade for use at a college. In fact, when I retreated after a few years into teaching only at university, the first thing I noticed was how much lighter my work load became. And I needed the respite, because, although I was young and healthy, the work was steadily grinding me down, especially since I needed to teach year round in order to keep above the poverty line.

Despite these disadvantages, I loved the work, especially dealing with the students. I was kept going, too, by a vague promise at one university that I would eventually be hired for some kind of full-time position. Dozens of Baby Boomer teachers would be retiring any day now, I was continually told – and when they did, I would be first in line for their jobs, because I had established myself as an effective teacher.

Then, slowly, the dice started being weighted against me. My non-dogmatic approach to criticism was out of fashion with the then-dominant Post Colonialists, and, although I muttered jokes about being the token humanist, I was increasingly looked at askance. Then the chair changed, and the new one announced that, instead of reserving sessional positions for those who had proved themselves, the department would use the positions to trade favors for its grad students at other universities. Suddenly, my income became precarious. And, right about then, I noticed that, when tenured staff retired, they were either not being replaced or else being replaced by relatively lowly lecturer positions.

Seeing the writing on the wall, I made the jump to technical writing. From there, I was so busy leapfrogging into marketing, consulting, and eventually journalism that I’ve had little time to look back. If my new work was just as unsettled, I appreciated that it paid much better, although ordinarily I have only a minimal interest in money. It also had new challenges, such as taking on responsibility for large projects and developing customer relations.

Still, when I do look back, I sometimes wonder where I would be if I had stayed on the fringes of academia. Then I look at my neighbor and other people who started as sessionals the same time as I did, and I have my answer: In exactly the same place that I was.


The Myers-Briggs Type Indicator, in all its official and unofficial forms, is one of the most popular personality tests in the world today. It is widely used by human resource managers, student counselors, and just about any other kind of person who needs to assess others. Indirectly, it frequently determines whether people are hired or promoted, or get the break for which they have been waiting. Yet, for all its widespread use, I remain deeply skeptical of the basic concepts behind Myers-Briggs testing. Not only does it seem too simplistic and scientifically unsound, but, if my results are any indication, it fails to give a consistent enough picture of personality to make it reliable.

As you may know, Myers-Briggs testing is based on four axes: Extraversion (E) and Introversion (I); Sensing (S) and Intuition (N); Thinking (T) and Feeling (F), and Judging (J) and Perception (P). The poles you are closest to can be put into a four-letter abbreviation that summarizes your general tendencies, such as INFJ.

But why these four scales, I have to wonder? Does anything in the study of psychology suggest these four dichotomies over any others? Why is there not a fifth category, such as visual versus written learning methods? Or physical versus mental activity, or types of organization?

While some additional implications are added to the four axes in the less superficial versions of Myers-Briggs, the basic structure seems largely arbitrary. The categories are clearly not based on anything impartial. If I show people four blue circles and two red squares, those who are not color-blind or altogether sightless will agree on what they are seeing, but I doubt that people unacquainted with Myers-Briggs will naturally divide personality into four types, or that, if they do, their four types will correspond to Myers-Briggs. That’s why, for all the popularity of Myers-Briggs, many psychologists do not give it much credence.

Similarly, when I notice that Myers-Briggs gives sixteen basic types of personality, my first reaction is that at least that’s four better than horoscopes manage. The comment is unfair, since people can be anywhere along these four scales, which provides for more variety, but the impression of over-simplicity remains. What, I always wonder, is not being measured by Myers-Briggs? And what is distorted because it registers on the tests but the tests are unable to diagnose it successfully?

As for the either/or questions that make up many forms of Myers-Briggs testing, don’t get me started. For many people, these are false oppositions. Albert Einstein, for example, preferred solitude for work, but could be extremely gregarious when his work was done. I suspect that most people do not think in terms of either/or on many questions of preference, either – I certainly don’t.

Another point that is deeply misleading is the common contention that Myers-Briggs testing is based on the work of Carl Gustav Jung. Being one of the few people I know who has ever read Jung, I strongly suspect that this claim stands only because almost no one is acquainted with Jung.

In fact, the main precursors to Myers-Briggs that you find in Jung are some simple diagrams with two axes (Intuition / Sensation and Feeling / Thinking), and a separate discussion of extraverts and introverts. The addition of Judging/Perception is the work of Myers and Briggs, and so is the codification – Jung was throwing out conceptual ideas rather than ones that could be observed and given a score. These conceptual ideas can be useful – which is why “extrovert” and “introvert” became part of everyday English – but Jung shows no signs of seeing his axes as something that can or should be directly analyzed. In many ways, Myers and Briggs have gone so far beyond anything that Jung intended that the insistence that their work is in any way Jungian seems nothing more than a rather desperate attempt to evoke one of psychology’s great names to bolster a rather dubious theory. In other words, it’s a marketing ploy — and, personally, I tend to mistrust anything with misleading advertising.

However, the real reason I distrust Myers-Briggs testing is that my results can vary widely, not only from test to test, but also from day to day. Looking at the first four online tests I found when searching under “Myers-Briggs,” I received four results when I took them one after the other: ENFJ, ENFP, ESTJ, and INFJ. Similarly, taking the first one on two separate days, my results were ENFJ and INFJ. For the second test, on subsequent days I registered as an ENFP, INFJ, and INFP.

These results do show some general patterns. For example, I definitely register as relying on Intuition more than Sensing, Feeling more than Thinking, and Judging more than Perception. However, Sensing, Thinking, and Perception do show up, depending on the test I take and the day I took it. As for the extravert/introvert distinction, I seem evenly divided, even when other aspects stay the same.

Possibly, I am more mercurial than most people, or my personality is close to being balanced on the four axes. However, the fact that different variants of Myers-Briggs produced only broadly similar results, and that none of them could produce consistent results from day to day, inclines me to suspect that the problem lies either in the tests or their theoretical framework. If Myers-Briggs were an accurate indicator of personality, then surely it would have some way to register people whose temperament varied. Furthermore, if the results corresponded closely to any objective reality, then more consistency should be present.

Of course, none of these tests were the official Myers-Briggs ones, but online ones whose thoroughness and reliability are questionable. However, I have taken Myers-Briggs tests in the past under more formal conditions, and their reliability was no better. So, again, I suspect the tests are the problem.

At any rate, common sense and direct experience both cause me to be highly skeptical of all forms of Myers-Briggs testing. Like I.Q. tests, Myers-Briggs tests seem to be dubiously conceived, and far more influential than their equally dubious results would warrant.


In the last few days, I’ve had several experiences that make me think about my role as a journalist in the free and open source software community:

The first was a reaction I had from someone I requested some answers from. Although I thought I was being polite, what I got back was an attack: “I am not prepared to answer any of these questions at this time. The intent of your article is to feed the flames and I will have no part in that. The fact that people like you like to stir up controversy is to be expected, since that is the job of any writer trying to get readers.”

This reply not only seemed presumptuously prescient, since I hadn’t written the article, or even decided what angle it would take, but also unjustifiably venomous, given that I didn’t know the person. Moreover, although I am in some ways a contrarian, in that I believe that questioning the accepted wisdom is always a useful exercise, when I write, I am far more interested in learning enough to come to a supported conclusion or to cover an interesting subject than I am in stirring up controversy for its own sake. The fact that an editor believes that a topic will get a lot of page hits is meaningful to me mainly because the belief sets me loose to write a story that interests me.

Still, I don’t blame my correspondent. He probably had his reasons for his outburst, even though they didn’t have much to do with me. But the fact that someone could react that way says some unpleasant things about some current practitioners of free software journalism — things that alarm me.

Another was the discovery of the Linux Hater’s Blog (no, I won’t link to it and give it easy page hits; if you want to find it, do the work yourself). I don’t think I’ve ever come across a more mean-spirited and needlessly vicious blog, and I hope I never do. However, recently, as I’ve been preparing stories, I’ve come across some commenters on individual mailing lists who were equally abusive. They are all examples, not only of what I never want my work to be, but also of the sort of writing that makes me scrutinize my own work to ensure that it doesn’t resemble them in any way whatsoever.

Journalism that stirs up hate or encourages paranoia — or even journalism whose focus is sensationalism — is journalism played with the net down, and I’m not interested in it. Oh, I might make the occasional crack, being only human, or use the time-honoured tactic of saying something outrageous then qualifying it into a more reasonable statement. But, mostly, I prefer to work for my page hits.

Such sites also suggest that the line between blogging and journalism is sometimes being blurred in ways that aren’t very complimentary to bloggers. While some bloggers can deliver professional commentary, and do it faster than traditional media, others seem to be bringing a new level of nihilism to journalism.

A third is the unexpected death of Joe Barr, my colleague at Linux.com. Joe, better known as warthawg or MtJB (“Mister the Joe Bar,” a story he liked to tell against himself) encouraged me with his kindness when I was first becoming a full-time journalist. Later, when I started writing commentaries, his editorials were an indicator for me of what could be done in that genre. As I adjust to the idea that Joe isn’t around any more, I’m also thinking about how I’ve developed over the last few years.

The final link was a long interview – almost twice my normal time – with Aaron Seigo, one of the best-known figures in the KDE desktop project. One of the many twists and turns in our conversation was the role of journalism in free and open source software (FOSS). As Seigo sees things, FOSS journalists are advocate journalists, acting as intermediaries between FOSS projects and the larger community of users. He wasn’t suggesting that FOSS journalists are fan-boys, loyally supporting the Cause and suppressing doubts; nothing in his comments suggested that. But he was pointing out that FOSS journalists are an essential part of the community. In fact, much of what he said echoed my own half-formed sentiments.

Seigo also discussed how a small number of people making a lot of noise can easily deceive journalists who are trying to be fair and balanced by making the journalists think that the noisily-expressed beliefs are held by more people than they actually are. As he points out, the American Right has been very successful in this tactic, especially through talk-radio. He worried that part of the recent user revolt against KDE 4 might be due to something similar.

Listening to him, I tried to decide if I had fallen for this ploy in the past. I decided that I might have, although usually I try not just to be thorough, but also analytical enough to sift down to the truth.

I was going to try to summarize what I had learned from these four separate experiences, but my efforts to do so only sounded sententious – to say nothing of self-important and over-simplified. But I’ve thought of all four as I’ve exercised recently, and I’ll be thinking of them for some time to come, too.


I won’t wear a T-shirt that advertises a product or a company. The way I figure, if I’m going to be a walking billboard, you’re going to have to pay me – assuming you can coax me into doing it at all. The closest I come are T-shirts advertising a cause I support, such as the Free Software Foundation, or a small band or art exhibit I happen to like. But, in reaction to the trend towards the billboard T-shirt, my preference is for T-shirt art that is a joke to me, but is obscure to most other people.

During the 1990s, one of my prize possessions was a Miskatonic University T-shirt. Readers of H. P. Lovecraft will recognize the name of the university whose faculty often explored the supernatural, and whose library contained a private collection full of deadly occult lore, such as the Necronomicon. The shirt showed some pseudo-classical buildings with tentacles coming out of them, and students fleeing. Since I was a sessional instructor at the time, I wore that T-shirt around the English Department a lot (which, come to think of it, may have something to do with the fact that I parted from academia; undoubtedly, the rather humorless chair thought I was making a statement – and, looking back, I suppose I was).

A few years ago, my favorite obscure T-shirt was from the Linux Journal. On the back, it read, “In a world without fences, who needs gates?” Members of the free software community will recognize that as part of a longer comment that used to be common in many people’s email signature: “In a world without walls, who needs windows? In a world without fences, who needs gates?” No doubt Microsoft’s legal counsels would like to eradicate the comment, but the lack of capital letters leaves the reference open to interpretation. Whenever I wear this T-shirt, someone is sure to come up to me on the street and congratulate me on it, but most people walk past it blankly.

Another favorite of mine reads simply, “++ungood” (read “double plus ungood”). The slogan is Newspeak from George Orwell’s Nineteen Eighty-Four. As you may recall, part of the motivation behind Newspeak was to regularize and simplify English so as to remove certain tendencies of thought. Specifically, rather than use a list of comparatives like good, better, best, Orwell’s language for totalitarians reduced them all to variations of good. Similarly, rather than having “bad” as a separate word, Newspeak reduced it to the opposite of good. So, “++ungood” means “bad” or, more accurately, “wicked,” and carries a political overtone of “politically undesirable” as well.

But my latest acquisition is the most obscure of all. It comes courtesy of Ben Mako Hill, an executive of the Free Software Foundation and a strong advocate of free culture, who kindly put the artwork online for anyone to use free. Meant to resemble the exercise gear issued by universities that students once stole but can now buy as souvenirs in most campus book stores, it reads “Property of Pierre-Joseph Proudhon.” Proudhon, as every scholar of anarchist philosophy (and nobody else) knows, is the political writer who coined the phrase “Property is theft.”

What I like about these obscure T-shirts (besides the polite but puzzled look down at the T-shirt shop where I get them made up) is that, although most people don’t get them, they are often excuses for people to start talking to you. And when someone does understand them, you know that you have at least some small thing in common with them. So, I foresee my obscure T-shirt collection growing.

Read Full Post »

I’ve always jumped around from job to job, but this time I’ve topped myself. I’ve taken the job of Galactic Emperor, six millennia from now.

Or, to be exact, I’m playing the role of the Emperor Simonides in the Imperial Realms game for which I did some writing last year. Steve Bougerolle, whose project it is, offered those who had contributed to the game the chance to be immortalized in this way, so I agreed.

I did think of standing in for Basileus III, the emperor notorious for ennobling his pet cats (and demoting one to Baroness when she scratched him), but I thought Simonides a better match. After all, for all my eccentricity, I’m not likely to give titles to animals. But Simonides, who helped reinvigorate the empire by mobilizing against the Nano threat, sounds like the steady, personally austere type of organizer I might at least hope to emulate.

I’m especially pleased because Simonides is one of the emperors whose accomplishments I specified while I was writing about the aliens and human clans in the game (the history is Steve’s).

One small problem with Steve’s idea is that I don’t look very Imperial. But a studious type like Simonides – who, I imagined while writing about him, probably had an office right behind the throne room where he spent most of his time – I might just be able to pull off (in the dark, with a group of near-sighted people who had forgotten their glasses). A warlike emperor would be harder for me to play with even marginal conviction.

Still, the thought of someone as solidly working class in origin as I am playing an emperor of any sort amuses me more than I can say, so I’m making sure that everybody knows of my elevation. If you’re a friend and you haven’t received an invitation to the coronation, all I can say is that, next time, you’ll know better than to slight me, won’t you?

Read Full Post »

Every once in a while, blogging delivers an unlooked-for personal insight. I had an example of this occurrence earlier this week, when I mentioned that, despite being left-handed, I had won handwriting awards in the first few grades at school. Suddenly, I realized that this experience helped to explain my interest in typography.

The connection was news to me. I thought I had developed an interest in typography when I was working as a technical writer and wanting to branch out into design. Partly, my motivation was to make myself more versatile and therefore more employable, and to add an extra bit of creativity to what was sometimes monotonous work.

However, I soon became fascinated with typography for its own sake. Not many people – including graphic designers – are well-versed in typography, but the selection of typefaces and their arrangement on the page is a miniature art form, full of arcane jargon and fascinating lore.

It’s hard to imagine now, for instance, that the rise of asymmetrical design was as controversial as Impressionism or Modernism in the arts – or that one leader of the so-called New Typography, Jan Tschichold, was considered so subversive in Nazi Germany that he was given the option of exile or imprisonment (he chose exile, first to Switzerland and after the war to England, where he designed the standard templates for Penguin books of the period – little gems of design that you can still find today, sometimes, in second-hand book shops).

And, like any art form, once you’re comfortable with the language of ascenders and descenders and kerning and letterspacing, typography changes your perception. Just walking down a street of shops became a whole new experience for me as I examined all the signs in a new light. Similarly, opening a book, my pleasure is substantially increased by a fine layout, or lessened by a poor one.

These are all reasons enough for the large collection of fonts I accumulated. However, I suspect now that my font-fetish is also a revival of attitudes formed in the first years of my education.
You see, I was left-handed, and no one expected my letters to have any elegance. The very fact that we read left to right makes writing awkward for lefties, and letters in cursive script especially are easier to form when your pen hand isn’t in the way.

But, having conquered a speech defect in Grade One, by the time I was introduced to handwriting in Grade Two, I was determined to defy expectations again. By an effort of will that, looking back, I now find hard to credit in a seven-year-old, I focused on the forms of the cursive letters, drawing them over and over at home in my own time until I could draw them perfectly.

Or so I thought. I wonder now if I won handwriting certificates as much because I did better than lefties were supposed to as because my handwriting was objectively among the best in my classes. Unfortunately, I don’t have a sample of my early handwriting to confirm or deny my suspicions.

No matter. What is important isn’t whether I really deserved the certificates, but that I became interested in the shape of letters for their own sake. I remember doing class presentations on the Greek, Phoenician, and Norse alphabets. And, well into my teens, copying out the final version of my essay (this was before personal computers) became a ritual all its own. I remember labouring over the letter forms, not much concerned with what I said, but determined to produce a beautiful page. In Grade Ten, I even did a calligraphed creative writing project that I did and redid many times, and only completed because of the deadline – and I laboured at least as much over the page borders as I did the story contents.

Those interests went unexpressed as I went through university and became an instructor then a technical writer. Even when I was a member of the Society for Creative Anachronism, I never did much of anything with calligraphy. Yet, like a root buried deep underground, the interest remained, waiting for the right conditions to send up tendrils and be reborn.
Odd, that I never saw the connection until now. The continuity and persistence confound me – yet, in seeing them, I now know a little bit more about myself.

Read Full Post »

(This article was originally published on the IT Managers Journal site. Now that the site is no longer active, many of the articles are no longer available, so I’m reprinting some of the ones I wrote to give them a more permanent home.)

Everyone knows that Napoleon’s invasion of Russia failed because of the winter, right? But the truth is, saying that is as incomplete as saying that the cause of every death is heart failure. The winter may have been the final blow to Napoleon’s grand design, but it need not have been.

The more you look at Napoleon’s Russian campaign, the more you realize that it ran into trouble long before the first snowfall. The campaign actually failed because of difficulties in scaling, combined with poor management by Napoleon himself. His example provides a case study of the pitfalls when planning any project, especially large ones, making it an object lesson for the modern corporate world.


A bigger team isn’t always a better one

To invade Russia in 1812, Napoleon assembled an army of 700,000 — probably the largest army up until that time. It contained some elite forces, but it was never an efficient fighting force. For one thing, its members spoke too many different languages to communicate well. Many parts of the army were traditional enemies of other parts, or had been fighting them recently. As a result, the army never cohered into a whole.

Also, its size meant that provisioning it was difficult. It could not even live off the land, as other French armies under Napoleon had done, because no area contained enough food for so many people. Instead, it had to keep moving, so quickly that its members were always well ahead of any supply wagons and frequently starving.


Ask yourself if you have the right resources and preparation

Contrary to a popular misconception, Napoleon did not go into Russia completely blind. He had maps, and he made considerable effort to stockpile food, resources, and horses. His instincts were sound, but he had no firsthand knowledge of what he was about to face.

On a map, it looked perfectly sensible to plan to use particular roads. But what Napoleon couldn’t see was that many of the roads in Russia were too narrow for the quick movement of large numbers of troops, and too muddy for artillery and supplies to pass. Nor could he see how foraging in a country as poor as Russia would be next to impossible.

Similarly, the far-sightedness of gathering supplies meant nothing if they weren’t the right ones. Napoleon gathered countless pieces of small-bore artillery, but these were a nuisance to haul, and useless in sieges like the one at Smolensk or artillery duels like the one at Borodino. Nor did he consider stockpiling pointed horseshoes suitable for travel in snow or winter clothing.

Still, even if he had gathered the right resources, the effort would have been largely useless, because among the things he neglected was any plan for delivering supplies to where they were needed. The one efficient piece of transport he arranged was his personal mail service, which could deliver a letter from Paris to Moscow in 14 days — and that luxury was unimportant in the campaign.


Decide on a goal and focus

Despite all his preparation and the size of his army, for once in his life Napoleon was uncertain what he wanted to do when he invaded Russia. Did he mean to occupy Moscow and Saint Petersburg? Carve up Russia between Sweden, Turkey, and a revived Poland? Force Tsar Alexander into a truce and go on to take India from the British? In the early months of the campaign, Napoleon considered all these goals. Unable to make up his mind, he could not act with his usual decisiveness, and failed to follow up on his initial victories until after he had lost the initiative.

This oscillation continued throughout the entire campaign. Days, if not hours, before he retreated, he remained uncertain whether he would leave Moscow or winter there. Even when he abandoned Moscow, he first moved south as if planning to face the main Russian army — then abruptly veered west to begin the long trek home. Not knowing what he wanted to do, he was, not unexpectedly, unable to do much of anything, or to do what he did do effectively.


Keep in touch with your team

Part of Napoleon’s leadership ability was his rapport with his troops. By moving among them, Napoleon could always get a first-hand feel for their morale and combat-readiness, and sense when punishment or a gesture of concern could improve the mood of his troops. Often, his appearance alone could inspire troops.

However, during the Russian campaign, this hands-on approach was rarely possible. At times, Napoleon was too ill. Yet, even when he was healthy, the size of the army and its dispersal meant that he could only use his personal touch on a minority of his troops. Too often, the only troops he saw were the Imperial Guard, those with the most loyalty and highest morale who, because of their proximity to him, were also the best-fed and supplied. Judging the rest of the army by the Guard, he assumed all was well as the rest of his army steadily sickened and dwindled away.


Don’t force subordinates to misrepresent or lie

Early in the campaign, Napoleon told his generals and field marshals that he wanted accurate reports about their troops. The trouble was, when they told him about the lack of supplies and the problems with desertions, Napoleon was prone to abuse them, sometimes publicly. At times, these tirades were followed by demotions or reassignment to difficult duties.

Faced with such consequences, Napoleon’s commanders soon realized that, for their own sakes, the last thing they should do was tell him the truth. Early in the campaign, they began exaggerating the strength and readiness of the forces they commanded. These exaggerations prevented proper planning and caused Napoleon to under-estimate the extent of the campaign’s difficulties until they were far advanced.


Choose substance over PR or positive thinking

All his life, Napoleon believed in his destiny, trusting it to carry him through times of trouble and upwards to future greatness. For much of his life, this belief served him well, possibly because the bravado that it produced constantly took his opponents by surprise.

But in Russia, where geography, weather, and distances were as much a problem as the opposing armies, there was too much reality for a positive attitude to conquer. Napoleon did his best to assert his will and frequently issued proclamations that pretended all was well or about to be so, but this stance became harder and harder to maintain — especially since Napoleon was intelligent and observant enough to be unable to deny the increasingly obvious truth. Still, for a long time, he persisted in believing that will would triumph over circumstance. Unfortunately, by the time he admitted what was happening, his army was crumbling and in an exposed position cut off from supplies. At that point, he had nothing to do except retreat.


Listen to experts and subordinates

Napoleon’s entourage included people whose knowledge could have countered many of the difficulties listed above. For instance, Caulaincourt, the former ambassador to the Tsar, warned him about the conditions of the roads, the poverty that eliminated the possibility of foraging, and the political situation that made it likely that the Russians would continue to fight, despite constant retreats and uninspired field leaders. But these were not opinions that Napoleon wanted to hear, so he ignored them until it was too late. It was only on the retreat, when Caulaincourt advised Napoleon to return home ahead of the army, that he listened to him — and then, the main reason was probably that Caulaincourt was saying what Napoleon wanted to hear.


Have a fallback plan

For several weeks beforehand, Napoleon knew that a retreat was a strong possibility. The alternative was to winter in hostile, barren territory. Yet, perhaps because of his reluctance to admit failure, Napoleon made no plans to prepare for the retreat. New conscripts arriving from the rest of Europe were still hurried up the line to Moscow, the farthest point of the French advance, rather than being assigned to secure possible routes. Similarly, no supplies were stored along the way. Nor were any scouts sent along the possible routes. When Napoleon made his sudden decision to retreat, he had made no preparations for it, which undoubtedly worsened the disaster of the march.


Sometimes, resources have to be abandoned before they become a liability

As the French army advanced into Russia, it carried a variety of unsuitable equipment, including hundreds of light field guns and carts ill-suited to the roads. Instead of destroying this equipment or abandoning it, Napoleon insisted on dragging it along. This stubbornness served only to slow his advance.

Even worse, on the retreat, the army was carrying as much of the loot from Moscow as possible. Not only did the loot encumber soldiers by filling their pockets and weighing down their belts with the objects hanging from them, but it also encumbered the army’s various vehicles, making them harder to move. The path of the retreat was soon littered with abandoned riches, but the determination to hang on to their spoils killed thousands of soldiers as they tried unsuccessfully to dodge marauding Cossacks or to hurry on to the next outpost where they might hope finally to get a meal.


You can lose by winning

Napoleon never lost a battle in Russia, although many, like Borodino, were indecisive. By traditional standards, he should have won, since he occupied Moscow, the old capital and still the largest and most important city, and his troops started home with fabulous riches. Yet he found his soldiers, horses and supplies steadily whittled away by disease, desertion, starvation, and exposure. After despoiling Moscow, he had to retreat, constantly harried by an enemy he no longer had the cavalry to close with. Unable to grasp fully the kind of war he was in or to adjust his tactics, he abandoned his army to rush home to France. In the end, only 22,000 — a little over three percent of his original force — survived to do the same.


Conclusion

Napoleon was a brilliant leader, one of the most outstanding of all time. But the fact that even someone of his caliber could make such mistakes only emphasizes that anyone can falter. Lack of preparation and focus, a belief in his own infallibility, a refusal to assess the situation objectively, a failure to follow the leadership techniques that had served him so well in the past — all these things led to the greatest catastrophe of his career.

It took another three years, an exile, and a return before Napoleon was finished for good. But his Russian invasion had drained the fighting strength of France and destroyed his reputation for invulnerability. After his Russian campaign, his rule was a constant struggle for survival against continually increasing odds. Finally, at Waterloo in Belgium, he met the Duke of Wellington — a man famous for his clear-headed planning and lack of nonsense — and lost his position once and for all. But the beginning of the end was his lack of proper planning when he invaded Russia.

Read Full Post »
