First, one of the reasons I haven't blogged in a while is that I haven't been able to think of a sufficient explanation for why I haven't blogged in a while. It seems like the first post back after a long hiatus should do something like that, but I really don't want to.

Second, I thought it would be nice to post a link here to my newest public website; I've finally gotten the new UNT International website up and running, and it's been working very well so far. Glad to have it finally public, but the job is far from over in terms of making it really sing.

One of the things I've been working on a lot lately in that project is making the underlying web application code run faster. Since I'm largely self-taught, I've had to figure out the best practices for myself; some things I expected to work failed miserably, while other things I expected not to work helped quite a bit.

For example: my web application, like so many others, is database-driven, meaning that the bulk of the content that appears to the end user is pulled out of a database when needed and incorporated into appropriate HTML templates, and the rendered HTML is then sent to the user. It's a great model, but there's a lot involved in it: for instance, suppose that (as sometimes happens in my application) one particular piece of content is used more than once in the course of program execution...say, an Article object. The way I originally wrote it, the application was unaware of these kinds of redundancies; that is to say, if Article 3 was needed sixty times in the course of program execution, it was retrieved from the database sixty times.
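To make the redundancy concrete, here's a minimal Python sketch of that original, cache-free pattern. This isn't the site's actual code (the post doesn't say what language it's in); the `articles` schema, the in-memory SQLite database, and the `get_article` function are all made up for illustration:

```python
import sqlite3

# Hypothetical in-memory stand-in for the application's real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES (3, 'Example Title', 'Body text')")

query_count = 0  # count how many times we actually hit the database

def get_article(article_id):
    """Fetch the article from the database on every request -- no caching."""
    global query_count
    query_count += 1
    return conn.execute(
        "SELECT id, title, body FROM articles WHERE id = ?", (article_id,)
    ).fetchone()

# Needing Article 3 sixty times means sixty round trips to the database.
for _ in range(60):
    article = get_article(3)

print(query_count)  # 60
```

Every call goes all the way to the database, even though the answer never changes between calls.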

This struck me as inefficient. After all, there's such a thing as memory; why not have my application store the data for Article 3 in its own memory and then use it from there, thus reducing the total number of database retrieval operations by 59? It'd be kind of like a professor sending a research assistant out to the library for a book one time and memorizing it when the assistant got back, rather than sending the assistant out to the library every time he wanted to reread a paragraph. It sounded good, and I implemented it...and my application, as a result, was sometimes 300% slower.
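The idea itself is just a dictionary in front of the database. Here's a Python sketch of the naive version I'm describing, with the same hypothetical schema as before; note that the `_cache` dictionary is unbounded, which is exactly the property that came back to bite me:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES (3, 'Example Title', 'Body text')")

query_count = 0
_cache = {}  # unbounded: every article ever fetched stays in memory forever

def get_article(article_id):
    """Fetch the article once, then serve every later request from memory."""
    global query_count
    if article_id in _cache:
        return _cache[article_id]
    query_count += 1
    row = conn.execute(
        "SELECT id, title, body FROM articles WHERE id = ?", (article_id,)
    ).fetchone()
    _cache[article_id] = row
    return row

# Sixty requests for Article 3 now cost exactly one database query.
for _ in range(60):
    article = get_article(3)

print(query_count)  # 1
```

For one hot article this is a clear win; the trouble starts when the application "memorizes" everything it ever touches.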

The thing is, sometimes you have to consider the fact that, regardless of talent, something that has too many jobs will eventually become bad at all of them. Undoubtedly our example professor could benefit in the short term from memorizing one particularly important book, but if he had to memorize all the books he ever used as sources in his research, I highly doubt he'd be able to teach his classes, or even write the paper he memorized the books for. Or, give a circus performer too many plates to keep spinning and he'll drop every one of them, even though he wouldn't have dropped any at all if you'd given him just one less. Delegate too much work to your employees and they'll eventually fail you; fail to delegate enough, and you'll fail yourself.
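In code terms, the usual fix for a cache that memorizes too much is to give it a size limit and evict the least-recently-used entries. The following is one common way to do that in Python, using the standard library's `functools.lru_cache`; it's a sketch of the general technique, not the fix I actually shipped, and the schema is still hypothetical:

```python
from functools import lru_cache
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES (3, 'Example Title', 'Body text')")

query_count = 0

# maxsize caps the professor's memory: once 16 articles are memorized,
# the least-recently-used one is forgotten to make room for the next.
@lru_cache(maxsize=16)
def get_article(article_id):
    global query_count
    query_count += 1
    return conn.execute(
        "SELECT id, title, body FROM articles WHERE id = ?", (article_id,)
    ).fetchone()

for _ in range(60):
    article = get_article(3)

print(query_count)                      # 1 -- only the first call hit the database
print(get_article.cache_info().hits)   # 59 -- the rest were served from memory
```

The point of the bound is the plate-spinning lesson: the cache only ever holds the handful of plates it can actually keep spinning, instead of growing until it drags the whole application down.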

Third, I've found that one of the reasons I never post to my blog is that I rarely find myself capable of writing a sufficient concluding statement. Even though what I've said in the body of the post may have been great, I'm never satisfied with the ending.