Reviewing my reading habits

It’s occurred to me that I’m becoming an increasingly lazy reader, preferring to read reviews of books rather than the books themselves. Below are some snippets from the latest to have caught my eye.

Reviews of books about dark Jewish comedians and insightful Australian art critics. Books on how the internet has changed our understanding of knowledge, how word processors have changed literature, and how art can save us from our bone-deep solitude.

The wondrous critic
The most manifest virtue of these essays is their language, marked by an uncommon command of vocabulary and (in our day) a far rarer mastery of syntax, allied to a thoroughly antiquated respect for the rules of grammar. Open this anthology anywhere and you will be hard put to find a sentence that is not as memorable for its very phrasing as it is for its thought.

The lonely city
She tells us that she often moved through New York feeling so invisibly alone that she felt like a ghost, and so started to think of other ghosts as suitable company. The dead, for Laing, are not so much historical figures as vibrant modern companions, and she invokes them with the ease and familiarity of old friends. She allows Warhol to pop up in the chapter on the web, Hopper to pop up in a chapter on Warhol, and so on. In Laing’s head, all of these artists are still alive somewhere – perhaps even in communion with one another. This thought makes her feel less alone, and she passes it along to us.

Rethinking knowledge in the Internet Age
In fact, knowledge is now networked: made up of loose-edged groups of people who discuss and spread ideas, creating a web of links among different viewpoints. That’s how scholars in virtually every discipline do their work — from their initial research, to the conversations that forge research into ideas, to carrying ideas into public discourse. Scholar or not, whatever topic initially piques our interest, the net encourages us to learn more. Perhaps we follow links, or are involved in multiyear conversations on stable mailing lists, or throw ideas out onto Twitter, or post first drafts at arXiv.org, or set up Facebook pages, or pose and answer questions at Quora or Stack Overflow, or do “post-publication peer review” at PubPeer.com. There has never been a better time to be curious, and that’s not only because there are so many facts available — it’s because there are so many people with whom we can interact.

How literature became word perfect
The literary history of the early years of word processing—the late 1960s through the mid-’80s—forms the subject of Matthew G. Kirschenbaum’s new book, Track Changes. The year 1984 was a key moment for writers deciding whether to upgrade their writing tools. That year, the novelist Amy Tan founded a support group for Kaypro users called Bad Sector, named after her first computer—itself named for the error message it spat up so often; and Gore Vidal grumped that word processing was “erasing” literature. He grumped in vain. By 1984, Eve Kosofsky Sedgwick, Michael Chabon, Ralph Ellison, Arthur C. Clarke, and Anne Rice all used WordStar, a first-generation commercial piece of software that ran on a pre-DOS operating system called CP/M.

Jews on the Loose
In his movie roles Groucho, for Lee Siegel, represents not an amusing attack on pretension but “the spirit of nihilism.” Siegel disputes the view that Woody Allen is Groucho’s descendant, for he feels that “Allen is simply too funny to be Groucho’s direct descendant.” Groucho is—and he is right about this—much darker. “No other comedians of the time,” Siegel writes, “come close to the wraithlike sociopath Groucho portrays in the Marx Brothers’ best films.”

Rather than solely answering our “Should I buy the book or not?” question, these reviews act as companion pieces to the books, whether the reviewer agrees with the author or not. The dialogue only adds to them.

I need to resist the temptation to treat the review as a substitute for the book, though. Maybe I need to find a review of a book about tackling laziness or something…

To err is human, to totally mess things up needs a computer

Here’s a fun article about a guy who accidentally deleted everything from all his company’s servers, including all his off-site back-ups – in effect, deleting his entire company. As you can imagine, the forums weren’t especially helpful.

Man accidentally ‘deletes his entire company’ with one line of bad code
“Well, you should have been thinking about how to protect your customers’ data before nuking them,” wrote one person calling himself Massimo. “I won’t even begin enumerating how many errors are simultaneously required in order to be able to completely erase all your servers and all your backups in a single strike. This is not bad luck: it’s astonishingly bad design reinforced by complete carelessness.”
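Without speculating on the exact command involved, here’s a hypothetical sketch of the general failure mode: a single deletion line whose target path is assembled from settings that might be missing. The Python and the setting names (backup_root, client_dir) are mine, purely for illustration – if the settings come back empty, the path collapses to the filesystem root and one recursive delete takes everything with it.

    import os

    # Hypothetical illustration only, not the script from the article.
    # A cleanup routine builds its target path from configuration values
    # that might be missing. With both values empty, os.path.join gives "/"
    # and a recursive delete would sweep the whole filesystem, any mounted
    # backup drives included.
    config = {}  # imagine a settings file that silently failed to load
    backup_root = config.get("backup_root", "")
    client_dir = config.get("client_dir", "")

    target = os.path.join("/", backup_root, client_dir)
    print(f"Would recursively delete: {target}")  # prints "/" here
    # shutil.rmtree(target)  # the single line that does the damage

Multiply that by every server and backup volume the script could reach, and you have the story above.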

A timely article, as there’s a project underway here to look at the feasibility of replacing one MIS with another, and how we’d manage the data migration that would entail.

It’s not just a matter of moving bytes around, though. The horrible mess-up above notwithstanding, that’s the easy part. When implementing a new MIS, there’s as much people stuff to resolve as techy stuff.

Here’s an interesting essay from the University of Michigan, from a dim and distant past when universities and other large organisations wanted to move away from mainframes to more networked environments.

Implementing an MIS
The implementation of a management information system can be a traumatic experience. At a minimum, changes in procedures will impact the ways in which plans are made, programs are developed, and performance is evaluated within the organization. New patterns of communications will emerge, and new – presumably better – information will be available to assist in carrying out decision-making and administrative responsibilities. Efforts to improve the MIS may also uncover the need for organizational changes which may be even more unsettling than the procedural changes necessary to implement the system. The introduction of a MIS may represent substantial change in the established way of doing business, which can be viewed with considerable alarm (and generate significant resistance) by those within the organization.

Different technologies, but the same concerns.

I found the principles proposed at the end of the article very interesting, and I hope a similar approach will be taken here.

It is important not to oversell the potential of the new system. Aaron Wildavsky offers a number of “rules” that are applicable to the implementation of any new management system. The rule of skepticism suggests that organizational officials should exercise a good deal of skepticism when presented with the initial concept of an improved management system. The rule of delay cautions officials to give the system adequate time to develop and to be prepared to face periodic setbacks in its implementation. As Wildavsky observes: “if it works at all, it won’t work soon.” The rule of anticipated anguish is essentially a restatement of Murphy’s Law – “most of the things that can go wrong, will.” Wildavsky suggests that management must be prepared to invest personnel, time, and money to overcome breakdowns in the system as they occur. And the rule of discounting suggests that anticipated benefits to be derived from the new management information system should significantly outweigh the estimated costs of mounting the system. Much of the cost must be incurred before the benefits are achieved. Therefore, the tendency is to inflate future benefits – to oversell the system – to compensate for the increased commitment of present resources.

And let’s not forget that Hofstadter’s law applies here too.