Sunday, April 24, 2005

Sorry that it has been a while

Of course, I am far from sure that anyone actually reads this blog, so it may be that no-one realized that I had been away.

Except, of course, that I haven't been away unless you count a 2 day trip to Germany for a meeting.

I have been busier than a centipede in a shoe shop because I have been trying to write some training materials for my company on what to do in conflict situations. I have to be honest and admit that I like writing almost any type of material, but this has been an interesting challenge. I have developed techniques that I use as and when required to resolve problems that arise when dealing with internal and external customers. I am sure that everyone who deals with customers has their own set of "tools" that they use for handling the situations that arise. However, it is surprisingly difficult to formulate my behavior into a set of rules that can be taught.

There were some things that stood out as key factors and I believe that these are true outside of the field where I work. Here are my thoughts on some of the things that are universally useful.

1. It is necessary to understand what both sides want out of any exchange. Ideally, you need to understand why both sides want these things, as there may be a solution that the other person doesn't know about which you can suggest. At the very least, you can understand why a person might seem so unhappy when you refuse what appears to be a frivolous request. Of course, you understand why you want what you want, don't you? Or possibly not. Often we want things because that is what we have always wanted in the past, and the reasons for this may no longer apply. There is an old and almost certainly untrue story which I will steal to demonstrate my point. A British company was buying some electronic parts from a Japanese vendor. The British company specified a failure rate of 7 devices in every 10,000, since high reliability was required - and this had been something of a sticking point with the Japanese. When the contract was agreed and the devices delivered, there was a small bag, clearly labeled and separate from the main contents. The bag contained 7 devices, clearly and individually labeled as faulty.

2. Communication. It may be that there is no resolution that serves both parties. Without good communication, you will never know. Even if there is no solution, it can help a great deal to explain why you are not going to meet the other person's terms. Once it is explained, the other person can often see that they wouldn't have agreed either.

3. When resolving a problem, someone will have to have control if you are to make good progress. In a protracted process, control may shift back and forth if that makes sense in the situation. It may even be sensible to discuss who has control as a discussion moves into different phases.

4. There is an old salesman's adage: you can win an argument and lose a customer. There is some truth in that. It doesn't mean that you always have to cave in, but it is worth considering what effect a win at any cost would have. Sometimes it can be better to leave the situation with each of you making a statement of position and agreeing that no mutually satisfactory agreement exists at that time.

Anyway, those were some of the thoughts that I had on the subject. I have no idea if they are of any use to you.

Monday, March 28, 2005

Testing

I would like to discuss some ideas about testing and maintaining systems that I feel strongly about. This might seem like an odd subject to feel strongly about – after all, documentation and testing are the dull bits of any project. Clearly I am biased by the nature of my job since I am often called upon to handle a crisis where an enterprise level application has gone seriously wrong. The situation is always an emergency. It almost invariably involves serious financial loss. In rare cases, it has put people in danger. The first priority is to get the system running again, or as is more commonly the case, limping along.

There are two things that generally cause these emergencies and they are very often found in the same project. The first is poor design. The second is poor testing. Generally poor design is found during testing. If your testing is bad enough then it may be discovered the day that your system goes live. If you find yourself holding the baby when an enterprise solution goes down, your best hope is that you have a good design that has been implemented badly. That may result in some late nights chasing bugs but it is fixable. A botched design may require an almost complete rethink, salvaging what code can be saved from the old system.

Let us consider how much testing an application needs. The answer is that it very much depends on the application. At one extreme, you have a simple application which is to be used rarely by few people and where the results of failure are trivial. A typical example of this is a tool used by a single developer, or a group of his peers, to perform a limited task where it is possible to get the same results in a different way. Let us imagine that you have a little in-house command line tool that tells you what the copyright strings embedded in a DLL are. This tool clearly needs minimal testing, and fault reporting is built in since you can pick up the phone and talk to the guy who developed it. At the other end of the spectrum, we have military applications where an error can cost thousands or millions of lives. You really, really don’t want a false positive or negative in an application designed to detect a first launch of nuclear weapons. That sort of software cannot be over-tested.

For most of us, the applications that we develop fall somewhere in between these extremes. Most of us write applications that are used in the commercial world. The impact of these systems failing is normally a financial loss. For a line of business application, the loss can be significant.

How much testing needs to be done depends on both the complexity of the application and the seriousness of any resulting failure. However, projects generally go over time and budget. When the code is written, it is tempting to cut testing to the bone to get at least close to schedule and after all, the code is good. It was written by smart people. How likely is it that there are hidden bugs? I sometimes think that smart people are the most dangerous kind since they can find new ways to mess up that we stupid folk can’t imagine.

It is accepted wisdom that applications should be rigorous in their error checking and should never assume that the parameters offered to them from an external source are valid. This is simple good sense. However, test data is almost always carefully selected to only contain valid data that will pass the tests. The real world is seldom so cut and dried as that. Real data will contain errors. So should test data. If nothing else, test data should contain errors for reasons of code coverage. If you test with only good data then you are ignoring many, many code paths.
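To make that concrete, here is a minimal sketch in Python (the function and its validation rules are my own invention for illustration). Testing only with good data would leave every error-handling path unexecuted; feeding in bad data as well is what covers them:

```python
def parse_quantity(text):
    """Parse an order quantity; reject anything that isn't a positive whole number."""
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number: %r" % (text,))
    if value <= 0:
        raise ValueError("quantity must be positive: %r" % (text,))
    return value

def run_tests():
    # Good data exercises the happy path...
    assert parse_quantity("7") == 7
    # ...but only bad data exercises the error-handling paths.
    for bad in ["seven", "", None, "0", "-3"]:
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # the rejection we wanted
        else:
            raise AssertionError("accepted bad input: %r" % (bad,))
    return "all tests passed"
```

Five of the six cases above run code that a good-data-only test suite would never touch.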

Testing broadly falls into two categories – functional and scalability/performance. Let us consider functional testing first.

Functional testing is testing to ensure that the expected results are obtained for sample inputs. That is to say, that the program does what it was intended to do. Virtually no-one tests that the program does not also do what it was not intended to do. If an application slowly leaks memory then it may be months before anyone notices. If the leak is slow and small then it might be a minor bug that can be ignored. Not all bugs are that benign. I recall one bug that was very nasty and took 6 months of my life to find and fix. The application did what it was supposed to do. It also overwrote some system memory (this was not a Windows application). After a particular sequence of operations was carried out 6 times, the program became unable to access the filing system on the machine. The reason was that a side effect of some faulty logic overwrote successive chunks of operating system structures until the OS filing system failed. So, whenever possible, test that an application doesn’t do anything that it shouldn’t as well as that it does what it should. There are a number of tools to help you do this under Windows and hopefully also under Linux. For Windows, the cheapest is the Application Verifier from the Application Compatibility Toolkit – a free download from http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnanchor/html/appcompat.asp . It won’t catch logic errors but it will catch a lot of bad system calls that would cause you grief later – including a lot of the ever-popular heap corruption errors. I don’t use Linux so I can’t recommend any tools for that, but I am sure that there must be some. Whatever your operating system, the time for such errors to show up is in the test lab rather than on the live server.
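The principle applies beyond native code. As a small illustration in Python (the leak here is deliberately planted for demonstration), a test can repeat an operation and check that memory is not quietly creeping upward, using the standard tracemalloc module:

```python
import tracemalloc

_cache = []  # module-level list standing in for an accidental leak

def leaky_operation():
    _cache.append("x" * 10000)  # keeps every buffer it ever made

def tidy_operation():
    buffer = "x" * 10000        # released when the function returns
    return len(buffer)

def grows_memory(operation, repeats=50):
    """Run an operation repeatedly and report whether traced memory keeps growing."""
    tracemalloc.start()
    operation()  # warm up any one-off allocations
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(repeats):
        operation()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    # Allow some slack; a genuine leak grows with every repetition.
    return current - baseline > repeats * 1000
```

Here grows_memory(leaky_operation) reports the leak while grows_memory(tidy_operation) does not – exactly the sort of symptom you want to catch in the test lab rather than on the live server.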

I think that now would be a good time to talk about system testing and unit testing. A unit is a component of a system. Let us imagine for a second that your system contains a component which processes a customer order. It works with the billing module. It works with the presentation layer to build a confirmation record for the customer. They are not written yet. The temptation is to wait until they are done and then test them all together. After all, they have to work together so that makes sense, no? Well, actually, no. Let us imagine that we have these three systems all written 4 months down the line. You kind of remember how the order processing component worked. You hook them up after fixing a problem where Bob was working on a different version of the interface specification but never mind, you are ready to test. Great – we are system testing. You test and something doesn’t work. You have no idea what! You have 3 completely unproven components and you are using them to test each other. You could step through in a debugger and follow what is happening but that will take a long time. Let us assume that you pull a couple of 90 hour weeks and get it working pretty well with the test data. Your boss seems happy again and all is well. At that point, how much of the code have you run in that subsystem? 40%? If you were careful with your test data, maybe you have run 70%. Do you feel happy releasing a product where at least 30% of the code has never been run? Would you like to bet your business on it? If you would then may I recommend spread betting? For the rest of us, I recommend unit testing. You build a harness and test the component in isolation. This is often rejected as being too time consuming and I can see why. It does take time to build the harness and test but it saves time later. If the software is critical, you may need to simulate errors to find out just what happens if that memory allocation fails.
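As a sketch of what such a harness can look like (the component names and interfaces here are invented, not taken from any real system), the order processor is tested against a stub billing module, which also makes it trivial to simulate the billing failure you could never provoke on demand in a full system test:

```python
class BillingError(Exception):
    pass

class OrderProcessor:
    """The unit under test: depends on a billing service it is handed."""
    def __init__(self, billing):
        self.billing = billing

    def place_order(self, customer, amount):
        if amount <= 0:
            return "rejected"
        try:
            self.billing.charge(customer, amount)
        except BillingError:
            return "billing failed"
        return "confirmed"

class StubBilling:
    """A harness stand-in for the real billing module."""
    def __init__(self, fail=False):
        self.fail = fail
        self.charges = []

    def charge(self, customer, amount):
        if self.fail:
            raise BillingError("simulated outage")
        self.charges.append((customer, amount))

def run_unit_tests():
    # Normal path: the order is confirmed and billing is called exactly once.
    ok = StubBilling()
    assert OrderProcessor(ok).place_order("alice", 10) == "confirmed"
    assert ok.charges == [("alice", 10)]
    # Error paths: the cases that are hardest to reach in a system test.
    assert OrderProcessor(StubBilling()).place_order("bob", 0) == "rejected"
    assert OrderProcessor(StubBilling(fail=True)).place_order("carol", 5) == "billing failed"
    return "unit tests passed"
```

When the real billing module arrives four months later, every code path in the order processor has already been run, and any failure in the combined system points at the glue rather than at three unproven components.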

Think of it like shopping. You are going to have to pay for what you get. If you pay cash, it is immediately visible and you feel the pain. Alternatively, you can put it on your credit card and your bank balance looks fine. However, you always pay in the end and the longer you delay paying, the more it costs you. System testing components that are well unit tested is always cheaper and easier. It is a classic case of a stitch in time saving nine.

Now, let us talk about system testing. One thing to recognise is that systems rarely have one release and then no changes ever. There is an old joke with some truth in it: the user does know what he wants, and he will tell you the moment you deliver what he asked for.

Ad hoc testing is good. Automated testing is even better. The great thing about automated testing is that it can be reproduced consistently. If you fix a bug that had no UI changes then the test script that ran last time should run just as well now. That doesn’t mean that the fix is fully tested, but it does mean that you haven’t broken anything fundamental. Some parts of the industry call this a smoke test, which was originally a hardware testing term: you turn it on and see if it starts to smoke. Automated tests can take longer to create than ad hoc tests but they are a gift that keeps on giving. There are several test tools out there and it would be unfair of me to recommend one but perhaps a little *rational* thought would single out one that *rose* in your mind when you needed to *test*
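A minimal automated test runner needs very little machinery. This Python sketch (the checks are placeholders; real ones would poke the application) runs every named check and reports failures rather than stopping at the first:

```python
def smoke_test(checks):
    """Run every named check; collect failures instead of stopping at the first."""
    failures = []
    for name, check in checks:
        try:
            check()
        except Exception as exc:
            failures.append("%s: %s" % (name, exc))
    return failures

# The checks are placeholders here; real ones would exercise the application.
def check_addition():
    assert 2 + 2 == 4, "arithmetic is broken"

def check_sorting():
    assert sorted([3, 1, 2]) == [1, 2, 3], "sorting is broken"

def check_deliberately_broken():
    raise RuntimeError("smoke!")
```

An empty list from smoke_test means nothing fundamental has broken, and the same list of checks can be re-run identically after every bug fix – which is exactly what ad hoc testing cannot give you.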

Scalability and performance testing are normally the last types to be done. Please note that I say “are” not “is”. These are very different things, as some developers have learned at great cost in late nights and missed deadlines. Performance testing shows how fast the code runs for one user. Scalability testing shows how well the application copes with many users and/or much data. Performance can be important when designing a game, a compiler or a system that is essentially single user. Scalability can be critical with a modern three-tier application. Consider the fastest growing area, namely online thin client solutions. eBay, Amazon, your bank and a hundred other companies do more and more of their business online. If you are launching an electronic banking solution, it is not going to be a happy day when you find that your system cannot be split over multiple servers because of an error in its architecture. Functional testing can start early on with unit testing; scalability testing should start as early as possible, even if you have to simulate large sections of the system. Scalability issues are often fundamental to the design of a system and accordingly fundamentally difficult to fix if you make a bad decision. To make matters worse, you normally discover scalability issues when your application is about ready for release. In the case of some developers, you discover them shortly after your application has gone live, when it almost immediately goes dead again because more than 100 people have tried to use it.

So, what is the relationship between scalability and performance for a web application? Let us imagine a web based application which displays a catalogue and accepts orders. That should be easy to imagine as there are hundreds out there and everyone re-invents the wheel. Performance is when it takes 0.1 of a second to place an order. Scalability is when the response time is less than 0.8 of a second when 500 users place an order. The two goals are often in conflict. This is something that application developers often discover when they start writing server applications. In a thick client application, it is a great idea to cache information and preload it for the user. When you have 200 users, that stops being a great idea. It may well be better to go to the database each time and let the database designer worry about how to handle the requests efficiently. That is normally a safe thing to do as the big database manufacturers have spent millions of dollars in tuning those database engines.
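One crude way to see why the two goals diverge is to model the bottleneck directly. This is a back-of-the-envelope sketch, not queueing theory: assume some fraction of each request (a lock, a shared cache, a single database connection) can only serve one user at a time, so every user queues behind that part of everyone else's request:

```python
def response_time(base_seconds, users, serial_fraction):
    """Crude response-time model when a fraction of each request must run
    serially (a lock, a shared cache, a single database connection)."""
    serial = base_seconds * serial_fraction
    parallel = base_seconds - serial
    # every user queues behind the serial part of every other user's request
    return parallel + serial * users

one_user = response_time(0.1, 1, 0.2)      # 0.1 seconds: fine performance
many_users = response_time(0.1, 500, 0.2)  # just over 10 seconds: no scalability
```

A system with excellent single-user performance and a 20% serial fraction still collapses under 500 users, which is exactly the sort of surprise that arrives on launch day.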

There are various tools which are good at testing these systems. Application Center Test, which comes with the Enterprise flavours of Visual Studio, is one, and there are multiple third party solutions. Ideally, these should be used with multiple client machines attacking a single server, since multiple instances of a test tool on one box are not at all the same in terms of timing. I would advise aggressive testing as well. Test for expected load. Test for wildly optimistic load, just in case your solution turns out to be what the world was waiting for. Test until failure – push the box, or better yet the boxes, until they fall over. If at all possible, spread the server load over more systems than you initially expect to run on and repeat. Learn where the bottlenecks are, open them up – and repeat the tests. Ideally, everything should fail at levels you will never reach, with all components giving up at about the same time. Oh, and it is a good idea to test with realistic data if you can, so that you don’t find that you are looking at better performance because you always hit the cache, or worse performance because you have an artificial hotspot in the database. Remember that real data contains errors and users who disappear without warning. So should your test data.
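Test-until-failure can itself be automated. Here is a Python sketch of the ramp (the toy server stands in for real load generation, which would of course involve real client machines): keep doubling the burst until something gives, and record the last level survived:

```python
class ToyServer:
    """A stand-in for the system under test: it copes with bursts up to
    its capacity and falls over beyond that."""
    def __init__(self, capacity=500):
        self.capacity = capacity

    def handle_burst(self, size):
        if size > self.capacity:
            raise RuntimeError("server fell over at %d requests" % size)

def find_breaking_point(serve_burst, start=10, factor=2, limit=1000000):
    """Keep multiplying the burst size until the system fails; return the
    largest burst it survived (0 if it never survived one)."""
    survived = 0
    load = start
    while load <= limit:
        try:
            serve_burst(load)
            survived = load
        except Exception:
            break
        load *= factor
    return survived
```

For a server with a capacity of 500, the ramp survives 320 and fails at 640; armed with that number you can go looking for the bottleneck, open it up, and run the ramp again.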

Finally, a few words about keeping these systems running after they are first developed. Systems generally fall into two types. The first is systems that will be live for a long time and will evolve as the business changes. This is a very common scenario. The second is one-off systems written for a special event such as an election or the Olympics (which I ignored earlier). These don’t live long but they are very critical while they last. In both cases, I recommend that there should be a test server (or cluster, as the case may be). This is also sometimes called a staging server. This server should be identical to the real one. It should have copies of all the supporting servers. It should be in a position where it could be switched live in a few minutes if needed. Except when it is being used for specific testing, it should be identical to the live server, and you must be ready to revert it to the state of the live server or send it live. You need this. The alternative is to test any changes to the software on the live business with no fallback plan. You also need it to act as a backup for a worst case scenario failure, or even to share load if a system has a surge in demand.

I have often explained this to wise and intelligent people who have been shocked by the idea. They always ask me the same question in an appalled voice – “Have you any idea how much that would cost?” As it happens, yes, I do have a pretty good idea of what it would cost. How much not doing this would cost a business in the worst case varies a great deal. I have known companies who were losing over a third of a million dollars a day because they didn’t have such a setup. The debate about what to do took 3 days. Delivery and set up of the hardware took 3 more. The system had already been down for a few days. All in all, the lack of the test server cost them about $3 million. The test system cost them about $6,000 plus time from their systems admin staff. Even if the losses had been 1% of what they were, the test server would have been cheaper.

So, testing may not seem very exciting. I would agree that it is not. However, not testing properly is very exciting indeed. If you live for the thrill and don’t care about how difficult it is to get another job then feel free to go for the excitement. For the rest of us with a mortgage and 2.4 children, I cannot recommend good testing highly enough.

Friday, March 04, 2005

Some of my guitar collection

My current collection consists of:

Starchild. Once a humble Telecaster, now a work of art courtesy of an artist called Alan. He will paint a guitar for you as well if you give him large amounts of money and a degree of freedom. Alan likes painting images of girls who are suffering from a shortage of clothes. Please contact me if you want him to do work for you. This guitar is equipped with two Seymour Duncan pickups that give it a classic sound.

Excalibur. Not a sword but an axe if you will pardon the term. This guitar was built for me by Rick of Sticks and Strings in Thatcham, Berks, England. Again, this is very similar to the Telecaster design of Fender (bless you, Fender and please don’t sue me) and equipped with Kent Armstrong pickups. Like Excalibur, this bears the Voodoo logo since they come from the same stable.

Fender Stratocaster. A stock Mexican model. I haven’t changed this one at all because it was exactly perfect for me as it arrived. There is a line in a Meat Loaf monologue: “I once killed a boy with a Fender guitar. I don’t remember if it was a Stratocaster or a Telecaster … It had a heart of chrome and a voice like a horny angel”. Silly boy. That is clearly a Stratocaster. I prefer the Mexican version to the US version as the pickups are a little brighter. This is an exceptional example.

Epiphone Les Paul. In honey sunburst. Again, a stock guitar. It is pleasant enough but it always leaves me feeling a little unsatisfied. It plays well and sounds just fine but somehow the magic isn’t there. Sometimes I just can’t get no satisfaction.

Epiphone SG400. This is a stock guitar except that I had the pickups replaced with Seymour Duncans. The result is rather pleasing. The intent was to have Gary Moore on the neck and Carlos Santana on the bridge, but it is actually rather good for laid back jazzy blues. The Bigsby tremolo looks so good but it serves no purpose since I never use it.

Tanglewood Baby. A tiny guitar that thinks that it is a Martin or a Taylor. This is a joy to play with a low action and a sound full of life and tone. It sounds as if it is a dreadnought (if perhaps a tad quiet) but is smaller than an electric. I can take it anywhere and the sound will stand up to any fair measure. I am sure that this is due to the Sitka spruce soundboard and the mahogany back, sides and neck. The fingerboard is ebony. It is a little neck heavy but I can forgive a few faults in a loved one.

Brunswick Acoustic in Sunburst. A good all round strumming guitar with a lot of top end. It isn’t remarkable but it is very playable and I rather like the look of it.

Yamaha Pacifica 112… well, it was once. This is a guitar that I got second hand from the lead guitarist of a Nu Metal band. It plays well and has a versatile range of tones thanks to the unidentified Seymour Duncan pickup in the bridge. It is a good guitar for casual play or for loaning to friends… after all, a ding or two won’t really spoil it. I have come to like the appearance even if it is very rough and ready. I can accept it for what it is.

Tanglewood shark. I liked the look (i.e. Shark shaped with a paint job to match) and it has a built in amp and speaker. Points for: It looks fun and can be played without an amp. Points against: It is horrible to play and has a sound like a banjo being played through a baked bean tin. It is true what they say. A fool and his money are soon parted. Even plugged in to a class A tube amp, it still sounds appalling.

Kramer Striker in cracked black ice. This guitar says ROCK. No, it screams it. It also demands Spandex outfits and big hair. The neck pickup is a hot humbucker. The bridge pickup is a Quadbucker – four rail pickups in one. If you need a little more output (and who could need more?) you can have both neck and bridge pickups on at the same time. Six pickups, anyone?

In a change of pace, a Yamaha classical guitar. Perfect for those more contemplative moments. It is an older guitar and that has given it a degree of maturity. It is less than ideal for playing “Smoke on the water”.

Ninja Katana Strat copy. This is a bit of a mystery. It is Strat shaped with knock-off pickups. It plays well enough and is at least 20 years old. It is cheaply made but somehow plays and sounds very well indeed.

Danelectro doubleneck. This is the version with 6-string and 12-string necks rather than the 6-string and bass variant. It has more jangle than a Rickenbacker and is simply fun. Astoundingly, it is also a pleasure to play and sounds wonderful on both necks. The lower neck is essentially a Danelectro U2.

A Danelectro U2. Danelectro guitars were originally sold via the Sears catalog and were designed to be cheap and light. For this reason, they were made of Masonite (like suitcases) and pine or chipboard blocks. They have a type of pickup known as a lipstick pickup because they were made of surplus lipstick tubes. Given this, you would reasonably expect them to sound horrible. Remarkably, they sound good. The tone is unusually toppy with little sustain. They can sound very good when you want a lot of brightness in chords or lead work.

Tuesday, March 01, 2005

Today, I will be talking about guitars, a subject close to my heart as I have quite a collection of them. I will describe the evolution of the modern guitar and mention some of the more interesting items in my collection.

I should briefly describe the parts of a guitar as I will be referring to them throughout this article. A guitar has a body, a neck and a head, which is sometimes called the headstock. The strings are attached to the body and vibrate between a point on the body called the bridge and a point at the end of the neck called the nut. There are raised strips on the neck called frets. Notes are selected on each string by pushing the string down to the neck, which effectively shortens the string to the length between the fret and the bridge. This shortens the wavelength of the vibration in the string and so increases the frequency. Hollow bodied guitars normally have one or more soundholes to project the sound forward.
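The arithmetic behind fret placement is well settled, so it can be shown concretely. In equal temperament each fret raises the pitch by one semitone, a frequency ratio of the twelfth root of two, so the vibrating length shrinks by that same ratio at every fret:

```python
SEMITONE = 2 ** (1 / 12)  # equal-temperament frequency ratio between adjacent frets

def vibrating_length(scale_length, fret):
    """Length of string left between the fret and the bridge when fretted."""
    return scale_length / (SEMITONE ** fret)

def fretted_frequency(open_frequency, fret):
    """Frequency rises by one semitone per fret as the vibrating length shrinks."""
    return open_frequency * (SEMITONE ** fret)

# A 648 mm scale (roughly the 25.5 inch Fender scale) halves at the 12th fret,
# and the open A string at 110 Hz sounds an octave higher there:
twelfth_fret_length = vibrating_length(648, 12)  # 324 mm
a_at_twelfth = fretted_frequency(110.0, 12)      # 220 Hz
```

Halving the string length doubles the frequency, which is why the 12th fret always sits at the midpoint between nut and bridge.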

Guitars have existed in various forms since the 15th century. Of course, these were acoustic instruments which relied on their body shape for sound projection. The elements that we would expect are all there in the early design, but with some interesting differences. The body of the guitar was longer and thinner than a modern guitar and the strings were gut rather than the more familiar steel or bronze of modern guitars. The sound would have been like a quiet classical guitar that uses nylon strings. The strings were tied to the bridge and to the tuning pegs at the head of the guitar. One obvious difference was the number of strings. Early guitars had 4 or 5 courses of string – a course could be a single, pair or triple of strings. Modern guitars have 6 strings, except for bass guitars and some of the electric guitars used for heavy detuned rock. 12 string guitars are actually a revival of an older idea, since they are 6 courses of 2 strings, each course tuned to the same note but, in the case of some strings, an octave apart. Original guitars had bowl (curved) backs – a concept revived by the Ovation guitar company. The frets were generally strips of gut tied around the neck of the guitar. The fret allows accurate pitching of notes, as I will describe later.

From the original design of the guitar, many variants have arisen. Here are some of the main types of which there is a bewildering number:

1. The Classical guitar

This is the clearest descendant of the original guitars. These are typically figure of eight shaped guitars with the upper part of the body narrower than the lower – they have a pronounced waist as most acoustic guitars do. They are nylon strung and have a fixed bridge with no method for altering the intonation which I will explain when I come to electric guitars. The front of the guitar body forms a soundboard as is usual with acoustic guitars with the back of the guitar having a lesser influence.

Cheaper contemporary models will use a composite (plywood) soundboard. More expensive models will have a high quality spruce top. The soundboard will typically have a pattern of bracing struts hidden out of view. Various woods are used for the back and sides. Frets are almost invariably metal on modern guitars. Classical guitars do not normally have a pickguard, which is a plate for protecting the top of the guitar from pick scratches, since classical guitars are usually played fingerstyle. Fingerstyle is where individual notes or sometimes pairs of notes are played rather than entire chords – and they are played with the fingers rather than a pick, as the name suggests. Classical guitars produce little volume and were generally restricted to parlour use until amplification became available.

2. The Spanish guitar.

The Spanish guitar is similar to the classical except that it has one or perhaps two plates on the soundboard that can be used for tapping to produce percussive effects. There are also two variants used in Mariachi, called the vihuela and the guitarrón, which are respectively smaller (and higher pitched) and larger (and lower pitched) than a standard guitar. Unlike the classical, Spanish guitars are sometimes strummed to play chords as well as being played in the classical fingerstyle.

3. Folk guitars

These are a very common type of guitar. Outwardly similar to the classical guitar, these have a large body (especially the type known as the dreadnought) and metal rather than nylon strings. To take the additional stress, the necks of folk guitars have one or rarely two truss rods – metal rods which contain an adjustable screw allowing the tension of the neck to be adjusted against the pull of the strings. They can be played like a Spanish guitar, although they are not normally struck for rhythm effects. Folk guitars (often just called acoustic guitars) are much louder than classical guitars and can fill quite an area if played with vigour, as many buskers have demonstrated. Some acoustic guitars have a cutaway, which is a change in the lower shoulder of the guitar to allow better access to the frets nearest the body. There are two common forms of cutaway, the Venetian and the Florentine, with the former being much more common. Folk guitars are often played with a plectrum or flatpick, and accordingly have a pickguard. A plectrum is simply a teardrop-shaped bit of plastic, although other materials are sometimes used. It is used to pick a string or strum multiple strings and gives a stronger and cleaner attack than a guitarist’s fingers, resulting in a cleaner and louder sound.

4. Resonators

There has always been a quest to make louder guitars, to fill a larger space or to stand out in a band. Acoustic guitars are relatively quiet when compared to instruments such as drums and brass and are easily lost in the “mix”. Before electronic amplification, mechanical methods were used to produce more volume. One such was the resonator design. A metal cone (or three, in tri-cone designs) was placed in the body of the guitar with the bridge driving the cone. These are significantly louder and have a harsher, brighter tone much admired by devotees of early Blues music. Many of these are intended to be played on the player’s lap with a slide or steel instead of conventional fretting. Pedal steel guitars are a further evolution that I will cover separately in the “electrics” section. The first resonators were seen in the 1920s but they are still made today for fans of older musical styles. Some resonator guitars have all-metal bodies, with some of the finest being made of bell brass. If you have ever wondered about the strange chromed guitar on the cover of the Dire Straits album “Brothers in Arms” then you will have seen, and probably heard, a resonator.

5. Lap steel guitars

As mentioned in the last section, resonator guitars are sometimes played on the player’s lap. This has been further developed in the lap steel and Hawaiian guitars, which are normally unplayable in a conventional manner. Lap steels range from conventional looking guitars with square neck profiles to electric examples which are little more than a plank with strings and a pickup. Lap steels are always played with a slide. Rather than the player’s fingers forming chords by fretting a combination of notes, a slide or “steel” is used. This may be augmented by fretting one or more notes with the fingers, but that would be unusual as most music arranged for slide guitar is designed to be played on instruments where this is not practical. The slide or steel is generally a metal or glass tube that the player wears on one finger and slides up and down the strings to alter the pitch. This gives a smooth change of pitch that is quite characteristic. This was first done by Blues players who used a knife for the purpose. Later players (who presumably wanted to keep all their fingers attached) started using cut and smoothed parts of glass bottles for the job. One interesting point is that the length of the string is determined by the position of the slide and not by the frets. My own lap steel has frets that are only printed on, since the string will never come into contact with them. Lap steels are often played with finger picks – essentially false fingernails, typically metal or hard plastic, used to pluck the string more firmly than would be possible with real fingernails.

6. Travel guitars

Acoustic guitars are fairly bulky. Go to any busy railway station and you will see people carrying or wearing guitars. In an attempt to make guitars more portable, travel guitars have been developed. Various approaches have been tried, from very small-bodied guitars such as the Martin Backpacker to guitars which can be disassembled, like the SoloElite Dragonfly. Most of these win admiration for their novel design rather than their tone. The Martin Backpacker in particular has a disappointing sound, considering that Martin is among the finest acoustic guitar makers in the world. Interestingly, some guitars which would previously have been described as parlour guitars are now being marketed as travel guitars. A parlour guitar is just a conventional acoustic guitar with a small body. I have a Tanglewood Baby that was marketed as a travel guitar. Happily, it is an exception to the rule that small-bodied guitars have inferior tone. As the name suggests, it is a small-bodied variant of a larger but similar guitar. The Baby has a “toppier” tone, which is to say that the higher-pitched overtones dominate the sound, something I personally prefer.

7. Electro acoustics

These are essentially acoustic guitars which have been fitted with a pickup or microphone to enable them to be amplified. Guitars of this type often have a small graphic equaliser built into the body. It is also possible to retrofit a conventional, true acoustic with popular kits such as the Fishman acoustic pickup. The pickup is normally mounted in the soundhole or at the base end of the body; the two mountings give different tonal qualities and there is much debate over which is the better position. Guitars of this type suffer from feedback problems when played with high amplification: the soundboard vibrates from the amplified sound, which is picked up and sent to the amplifier, which plays the sound, which makes the soundboard vibrate… and so on, until the sound dissolves into a distorted howl. This feedback effect is often used in rock, where solid-body guitars predominate, as the effect is much easier to control on a solid-body guitar.

Electric Guitars

Electric guitars are the mainstay of modern Rock, Blues, Funk and much pop. They range from electro-acoustic guitars to wonders of electronics. They use magnetic pickups to convert vibration in the string into a signal for an amplifier. Electric guitars are very quiet indeed when played unamplified, with solid-body guitars being effectively silent since they have no sound cavity to project the sound. Electric guitars are always steel-strung and have a truss rod to counter the pull of the strings - except for some Rickenbackers, which have two, and some early Telecasters, which had none. Most statements about guitars seem to have an "except" clause.

Conventional magnetic pickups have one pole per string. The strings have a steel core, and the movement of the string through the magnetic field of the pickup induces a small current, which is then sent to an amplifier and on to one or more loudspeakers.

8. Semi-acoustics, sometimes called archtops

These are guitars which are hollow-bodied but have magnetic pickups. They may be similar in appearance to conventional acoustic guitars but generally have a thinner body. They very often have a classic appearance, since the more avant-garde designs work less well – and the more avant-garde designers have generally favoured solid-bodied rock guitars. Classic examples of this type are the Epiphone Casino, the Gibson ES-335 (the ES stands for Electric Spanish) and the Rickenbacker 360. Technically, these guitars can be played acoustically, since they are hollow-bodied, but the sound is too thin and quiet to be usable. They suffer from the same feedback issues as the electro-acoustic designs. It was this problem that led to the development of the solid-body guitar – a definite improvement on the previous solution of stuffing towels into the hollow spaces of the guitar. These are the guitars most commonly used in Jazz. Their construction is similar to that of acoustic guitars.

9. Solidbody electric guitars

These are available in a bewildering variety of styles and colours. As a guitarist, I can confidently state that they all sound and feel different. To most non-guitarists, they all sound pretty similar. They all have the same basic design – a solid slab of wood for a body with one or more magnetic pickups. The differences between different models may be cosmetic or more fundamental.

Wood type

The type of wood affects the density of the guitar body and its acoustic properties. While solid-body guitars do not project the vibration of the plucked string, the vibration travels within the body of the guitar and resonates, giving complex overtones. Generally, the denser the body, the “darker” the tone. Guitars such as Gibson's Les Paul models use mahogany with a maple cap and have a characteristically dark tone compared to guitars made of ash or alder, which are much less dense woods. These woods are used on guitars such as Fender's Stratocaster and Telecaster, which have a brighter tone. It is always difficult to describe guitar tone, as English (and probably other languages) lacks the words required to capture the subtle differences. Denser, harder woods also give more sustain – the note takes longer to die away after being sounded. How desirable this is will depend on the style of music being played. The least dense guitar that I have played is the Danelectro U2, which is made primarily of Masonite, a material more commonly used for suitcases. This gives a remarkably bright, jangly tone.

Pickups

There are three types of pickup in reasonably common use on electric guitars: single coil, humbucker and piezoelectric. Some guitars have more than one type fitted to give a wider range of sounds, and pickup selector switches allow one or more pickups to be used at the same time.

Only the first two of these are magnetic pickups. These have a magnet per string (or a single large magnet on some cheaper models) wound with coils of wire, sitting in the body under the strings. When the ferrous string moves in the magnet's field, a small current is induced in the coils. Single coil pickups are used to excellent effect on guitars such as the Telecaster and Stratocaster, and are generally paired with lower-density woods. Single coils have perhaps the most guitar-like of the tones available from magnetic pickups, but they suffer from electrical interference: the Stratocaster is famous for its 50 or 60 Hz hum (depending on location) when no note is being played. The humbucker was developed to eliminate the hum. It does this with paired coils arranged out of phase so that the interference cancels out. This works well but gives a different tonal response to the single coil – and a higher output, which is much prized by rock guitarists. The thicker tone of the humbucker is generally paired with denser woods; classic humbucker-equipped guitars include the Gibson Les Paul and Gibson SG (short for Solid Guitar). Some guitars intended for heavy rock are equipped to allow up to 8 pickups to be used at once to give ludicrously high output levels.

Piezoelectric pickups are quite different. They are normally built into the bridge of the guitar and use crystals to convert pressure (and accordingly vibration) into voltage. The output of these crystals is very small and it is normally necessary to have active electronics built into the guitar to bring the signal up to a usable level. Piezo pickups give the most faithful reproduction of the vibrations occurring within the guitar. They are the least common type of pickup. There are also MIDI pickups, but they are very rare; I will mention them again when I come to the future of the guitar.

The Whammy Bar / Vibrato / Tremolo

This is the handle-like projection that you may have seen on electric guitars. The original purpose was to add a little vibrato to a played note or chord. They are not used so much these days, except in heavy rock, where a technique known as the divebomb is used. The whammy bar works by allowing you to alter the tension on all the strings at once, very rapidly. A little tug and the pitch swings up; a little push and the pitch drops. A divebomb is a huge drop, sometimes making the strings go completely slack and hit the pickups with a bass thud. The early systems were the Fender design and the Bigsby. Both had disadvantages: the Fender system often caused the tuning of the guitar to drift, while the Bigsby was more stable but allowed less variation. Various refinements were tried with some success at improving both systems. The biggest advance was the Floyd Rose tremolo, which has the strings clamped at both the nut and the bridge, giving excellent tuning stability. It is a floating tremolo, which means that it can travel both up and down, so it is both stable and highly variable. One common type seen on modern guitars is the dive-only tremolo, where the pitch can be lowered but not raised beyond the original pitch because the bridge is mounted flush to the body of the guitar. This is slightly cheaper to build and rather more stable.
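The physics behind all of this is simple enough to sketch: a string's pitch goes with the square root of its tension, so slackening the strings drops the pitch smoothly. A minimal illustration (the 440 Hz figure is just an example note, not tied to any particular guitar):

```python
import math

def pitch_after_tension_change(f0, tension_ratio):
    """New pitch of a string when its tension is scaled by tension_ratio.
    Frequency is proportional to the square root of tension."""
    return f0 * math.sqrt(tension_ratio)

# Halving the tension on a 440 Hz string drops it by a tritone.
print(round(pitch_after_tension_change(440.0, 0.5), 1))  # 311.1
# Quartering the tension drops it a full octave.
print(pitch_after_tension_change(440.0, 0.25))  # 220.0
```

This is also why a divebomb sounds the way it does: tension falls towards zero, so the pitch sweeps continuously down rather than stepping through notes.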

The bridge and the mounting of the strings

Various approaches have been used to attach the strings to the body. Acoustic guitars use plastic pegs to hold the strings into the body and have a single bridge saddle. Electric guitars have the strings going through holes in the bridge – in the case of some guitars, such as the Telecaster, through the body of the guitar as well. While acoustic and classical guitars have a single saddle, electric guitars have three or six saddles making up the bridge. These are essentially segments of the bridge that are individually adjustable, allowing the intonation to be set. A string that is in tune when played open (no note fretted) may be out of tune when played fretted at the 12th fret (one octave higher). By small adjustments to the saddle, the length of the string can be altered to give a good compromise, so that the string isn't far out of tune at any point on the neck. It has always struck me as strange that the most expensive custom-made acoustic guitars have no adjustment at all for this, while the cheapest and nastiest electric guitar does.
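The arithmetic behind intonation can be sketched quickly. In equal temperament, each fret shortens the vibrating length by a factor of the twelfth root of two, so the 12th fret sits at exactly half the scale length, which is why that is where intonation is checked. The 648 mm scale length below is just an illustrative figure, roughly that of a long-scale electric:

```python
SCALE_MM = 648.0  # example scale length, roughly 25.5 inches

def fret_position(n, scale=SCALE_MM):
    """Distance from the nut to fret n, in mm, for 12-tone equal temperament."""
    return scale * (1 - 2 ** (-n / 12))

# The 12th fret falls at exactly half the scale length: one octave up.
print(fret_position(12))  # 324.0
```

Adjusting a saddle effectively changes `scale` for that one string, nudging every fretted note slightly sharper or flatter at once, which is why a small tweak can bring the whole neck into acceptable tune.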

Tone controls

The tone control of most guitars is classic 1950s electronics. Essentially, the tone control allows you to bleed off some of the higher frequencies. Internally, it is simply a variable resistor (a rheostat) and a small capacitor.
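As a sketch of why that works: the resistor and capacitor form a simple first-order low-pass arrangement, and turning the knob changes the resistance and so the frequency above which treble is bled away. The component values below are purely illustrative, not taken from any particular guitar:

```python
import math

def cutoff_hz(resistance_ohms, capacitance_farads):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1 / (2 * math.pi * resistance_ohms * capacitance_farads)

# Illustrative values: a pot rolled down to 25k ohms with a 0.047 uF capacitor.
print(round(cutoff_hz(25_000, 0.047e-6)))  # 135
```

Rolling the knob the other way raises the resistance, pushing the cutoff above the audible range so the signal passes essentially untouched.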

Styling

For many guitarists, looks are as important as function. The style of the guitar has a lot to do with the sort of music the guitarist plays: a Heavy Metal guitarist will want something in black with spiky bits, while a Blues player will want something more traditional. The most common designs are based around a few classics. The Telecaster is a guitar-shaped slab with a single cutaway. It has always been a popular shape and has recently enjoyed a return to fashion. The Stratocaster is the most copied shape of all, with a wonderfully retro design that was intended to look futuristic in the 1950s; instead, it conjures up images of classic American cars. It is a design classic loved by stars such as Jimi Hendrix, Mark Knopfler, Eric Clapton and countless others. The Les Paul is a small-bodied guitar, mercifully so, as the thick mahogany body is very heavy. The classic versions of this body type are the flame top (so called because of the tiger-striped grain on the maple cap) and the gold top. Again, this is a hugely popular shape that has been copied many times. Any number of heavy rockers have used them, including Zakk Wylde with his striking black and white bullseye design. At least 70% of the guitars that you see in a shop will be based on these designs.

The future of guitars

Most guitars are essentially based on 1950s technology, but there are some interesting advances in the wind.

Gibson have built a Les Paul with an Ethernet interface to connect directly to a network. Applications are so far limited, but computers are increasingly the recording solution of choice. It has yet to find a mass market.

MIDI pickups are now built into some high-end guitars and are available as a rather costly after-market add-on. With the pickup and an electronic box of tricks, the guitar can output MIDI and, with a synthesiser, mimic a wide range of instruments, including electronic and wind instruments.

Perhaps the most radical change is the modelling guitar. Line 6 produce the Variax range, which has some very advanced digital signal processing (DSP) built in. The design hides this behind conventional controls, so the guitar appears like a guitar and not like the computer that it really is. The most striking visible difference is that there are no magnetic pickups; the pickups are actually piezoelectric units built into the bridge. The signal from these is fed into the DSP and converted into very good copies of classic guitars. With the guitar selector knob and the pickup selector, you can select from an entire stable of classic guitars, including some very rare acoustics. It is a remarkable bit of engineering.

Sunday, February 13, 2005

In my last blog entry, I talked about why aging made good biological sense. It doesn’t seem quite so attractive to the individual. We would like to live longer and not age.

One contributory factor is that we lead unhealthy lifestyles. We are sedentary and we eat too much salt, fat and sugar. Again, this is the result of the conditions we lived in before we developed a civilisation. Sugar and fat are good energy sources, and they were in short supply when we were plains hunter-gatherers. If you found some fat or sugar, you ate it to improve your chances of making it to the next mealtime. They were no better for you then than they are now, but they were not something that individuals encountered often enough for it to be a major problem. This is especially true when you consider how short the lifespan of hunter-gatherers was: 30 was a good age and 50 was ancient. A disease that might kill you at 60 was no threat at all. As for salt, it was protective in the sun, since we lose salt when we sweat. That was handy when we lived under the baking African sun, but less so now that we live in air-conditioned homes.

Interestingly, the life expectancy of a male newborn in 1900s America was 49 years. Maybe we are closer to the hunter-gatherers than we like to think. In evolutionary terms, they are the same species. Give one a shave and a sharp suit and you would pass him on the street without a second glance.

We also ingest all manner of toxins for their psychotropic effects. We are not alone in this as some other mammals will eat spoiled fruit for the alcohol.

So, as we get older, we tend to gain fat that we don't need and fur up our arteries with fatty deposits. We are fighting back with gyms and with cholesterol-lowering drugs called statins, but prevention is better than cure – and I speak as someone who eats too much fat and doesn't spend enough time in the gym.

Not all species show aging. The common factor seems to be that creatures that never stop growing (such as lobsters) do not show aging and don't generally die of old age, unlike creatures that reach a final size, such as humans.

So, why do we stop growing? Well, there is a regulatory mechanism. Our DNA has caps on the ends, called telomeres. When a cell divides, so does its DNA. I am sure you will have seen the ball-and-stick DNA models. In division, one side of the strand goes to each of the daughter cells, and the other half of the DNA is then reconstructed so that each cell gets a full strand – a pretty complex process, with all manner of clever error checking. Now, telomeres sit at the ends of the DNA, and each division knocks one piece off the telomere chain. If this can't happen, replication fails and the cells die. This limits the number of divisions. The length of the telomere chain is a key factor in cell aging: old cells have short chains, young cells have long chains.
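The counting mechanism can be sketched as a toy model. The numbers here are made up purely for illustration; real telomere lengths and loss rates vary by species and cell type:

```python
def divisions_until_failure(telomere_units=50, loss_per_division=1):
    """Count how many divisions a cell line manages before its
    telomere 'chain' is exhausted and replication fails."""
    divisions = 0
    while telomere_units > 0:
        telomere_units -= loss_per_division  # each division trims the chain
        divisions += 1
    return divisions

print(divisions_until_failure())  # 50
```

The point of the sketch is simply that a finite chain, trimmed a little on every division, acts as a built-in division counter.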

So, those pesky telomeres deserve the blame? Can something be done? Well, no, they don't deserve any blame, and yes, something can be done. Unlimited cell division is not a good thing. We need, and have, mechanisms for regulating it. When regulation fails, you get uncontrolled replication, and that is called cancer: cells where something has gone very wrong with growth regulation. So, to the question of whether anything can be done about the shortening of telomere chains. Something is done – we have a repair mechanism that uses an enzyme called telomerase, which tacks new telomeres onto the end of the chain. Some forms of cancer are associated with cells producing their own telomerase.

Could we use telomerase to rejuvenate cells without causing cancer? The answer is yes and no. We can fix up the telomere chains, and that will help a lot. However, DNA does get damaged, both in the normal life of the cell and in cell division; ionising radiation and free radicals both damage DNA. We have mechanisms to detect damaged DNA and even to repair it, but they rely on the detection and repair machinery itself working properly. By definition, where repair is necessary the DNA is damaged and the cell is in an abnormal state, so the detection and repair mechanisms may fail and let errors through. How dangerous these errors are varies a great deal. Some DNA appears to be redundant, and errors there probably don't matter at all. An error in the replication control mechanisms could be fatal, not just to the cell but to the organism. This is a place where we must tread very softly indeed. Suppose the chance of a dangerous error were 1 in 10 million: 10 million cell divisions really isn't that many in biological terms.

There are also other factors – not all cells reproduce, but they still age. That is because cells rely on complex mechanisms built from complex chemicals such as lipids, and these get damaged just as much as the DNA. When a cell is damaged, it works less well. If it is damaged enough, it dies.

So, what would we need to do at a cellular level to reduce aging? There are a few things that are possible with current technology and a few more which may be possible relatively soon:

- reduce the amount of free radicals. This is actually not that hard to do. Anti-oxidants are good at mopping up free radicals and can be absorbed by humans without trouble. Beta-carotene is just such a compound and it is found in fresh vegetables, especially carrots. So, eat up your greens everyone.

- avoid ionising radiation. Alpha particles are not that bad because they don't penetrate far into the body. Beta radiation, such as that given off by conventional televisions and monitors, is a bit worse, but manufacturers are increasingly keeping radiation levels low. Gamma radiation will go where it wants and there is not much that you can do about it.

- Maybe we can use telomerase to extend cell life and therefore the life of the individual.

- Maybe we can use nanotechnology to un-gunk blocked arteries and repair damaged structures. If you are unfamiliar with the idea, nanotechnology is the production of tiny structures and machines with molecular parts. At present, we have only some very simple building blocks, such as a ball (Buckminsterfullerene), tubes (carbon nanotubes) and a design for a toothed gear wheel. It is safe to say that it will be a while before this technology blossoms, but it shows great promise.

- Recombinant DNA. It is just becoming possible to patch up DNA by using a virus to replace a damaged section of human DNA with a fixed segment. Again, this technology is in its infancy, but it is possible that it could be used to repair errors in DNA, bit by bit, and so reduce aging.

A child born today can reasonably expect to see his or her 80th birthday.
Maybe their grandchildren will see their 200th.

Next: Something completely different.

Aging - why and how

Much of what I am going to say here goes against Christian doctrine and Feminist ideals. For those who believe that Genesis is the literal truth, I would suggest that you probably don’t want to read this. For the feminists, I sympathise. What I will describe here is about how we are made. We can change how we think of each other and how we act but we can’t change our genes or our past.

People get older. We are all familiar with that and with the outward signs. Skin becomes less elastic and wrinkles. Hair loses colour. In men, there is hair loss to a greater or lesser degree; women do not experience significant hair loss, as it is an effect of male hormones. Women undergo the menopause, with attendant changes such as osteoporosis. These are macro-level changes. However, we have to consider what the human body actually is. We are a collection of cells that cooperate, a complex organism composed of simpler but still complex organisms. To really understand aging, we have to consider how aging affects the cells.

Let us consider why we age. We age because we do not repair ourselves as fast as we are damaged by our environment. This is not terribly obvious at a macro level, although you will often hear comments about how the young heal faster. If you are no longer in the first flush of youth (and I am surely in this camp), this might seem an unfortunate feature of the human body. However, it exists for reasons that make good sense from an evolutionary perspective.

The purpose of an individual from a species perspective is to make more individuals. From our own personal perspective, we might regard our purpose as to play a perfect flute solo or make a million dollars or write the last great novel. From a biological perspective, our purpose is to breed. Moreover, we have evolved to do tolerably well when resources are not so plentiful. We are relatively cheap to build. We could have evolved with better repair systems but that isn’t an evolutionary advantage since each individual would require more resources. Picture this:

Subspecies A (repairers) have good repair mechanisms and can live indefinitely barring predation and accident. For a given level of resources, they can produce 2.1 offspring for each couple. They begin to reproduce at age 30 since they will mature more slowly.

Subspecies B (breeders) have poor repair mechanisms and can live for up to 30 years barring predation and accident. For a given level of resources, they can produce 3.0 offspring for each couple. They begin to reproduce at the age of 14 since they will mature more quickly.

After 14 years, generation 2 of the Breeders is on its way. By year 28, the second generation is working on a third. At year 30, the Repairers' second generation is finally on its way, while the first generation of Breeders is dying off. Allowing some time for gestation, that gives us 6 Breeders (2 having died) and 4 Repairers (none died) for each original couple. Clearly, if unchecked, Breeders will do best.

What happens if we allow for predation and accident? Let us assume that deaths among the young can be replaced, because this is what happens in nature. Well, this tilts the balance further in favour of the Breeders. Because each individual is “lower cost”, the loss of an individual has less effect. The loss of individuals who have already reproduced has hardly any effect, because they would be dying soon anyway and will produce no more offspring. Because individuals age, it will be mostly the very young and the older members who are caught by predators or freeze in the winter – and the young are quickly replaced by the Breeders. The Repairers will not lose seniors preferentially, because of their superior repair mechanisms, but they will still lose the very young, and these cost more to replace because of the higher resource needs of a fully self-repairing design.

So, in both those cases, the cheaper but short-lived subspecies wins. What happens if we give both groups a hard time and reduce resources, so that Repairers can produce only 1.5 individuals per generation and Breeders only 2.5? Both can still increase, but let us factor in accident and predation. Assume that 0.5 individuals per generation are killed in 20 years. This will mean that many of the Breeders die before reproducing, and the population will struggle to remain at its previous level. Since the Repairers do not reproduce until much later, many more of them will die before reproduction. Again, the Breeders do better. This model is conservative, since many species can produce more than 4 individuals per generation.
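The arithmetic of that thought experiment can be sketched as a crude growth model. The maturity ages and offspring counts are the ones from the scenario above; everything else (non-overlapping generations, no deaths, no resource limits) is a simplifying assumption:

```python
def population(offspring_per_couple, maturity_age, years, start=2.0):
    """Size of a lineage after the given number of years, starting from
    one couple. Each completed generation multiplies the breeding
    population by offspring_per_couple / 2 (two parents per couple)."""
    generations = years // maturity_age
    return start * (offspring_per_couple / 2) ** generations

# After 84 years: Breeders have had 6 generations, Repairers only 2.
breeders  = population(3.0, 14, 84)   # mature at 14, 3.0 offspring/couple
repairers = population(2.1, 30, 84)   # mature at 30, 2.1 offspring/couple
print(breeders > repairers)  # True: Breeders massively outnumber Repairers
```

Even with the Breeders' small per-couple advantage, the compounding effect of the shorter generation time dominates, which is the heart of the argument.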

So, although it doesn't suit the individual to age and die, it does suit the species. If we look at populations of species, we see that this pattern is followed everywhere. Insects are cheap and short-lived, and there are many billions of them. Rats are a little more expensive and live a bit longer, and there are billions of them. Elephants are expensive and long-lived, and there aren't very many of them.

Ah, but hang on! There are billions of humans now, and we are not that much cheaper than elephants. Well, yes, that is true. We cheated. By developing intelligence, we have changed the numbers in our favour. Through cooperation, we have almost eliminated predation. We build shelters against the sun. We wear clothes against the cold. We plant crops so that there is food in the lean times. In evolutionary terms, this happened an eyeblink ago, and we still have the same design at the cellular level that we always had. We are slow breeders, not repairers.

Actually, there is an interesting little wrinkle with humans. Human females live longer than males, well past reproductive age. This sounds like a glitch, but it isn't. Evolution works on groups as well. Intelligence and experience can help a group survive. Groups with older females still around did better, and so passed on their genes. Why older females rather than older males? Because males were more exposed to danger and so were less likely to grow old. In species terms, this makes sense as well. Human gestation is 9 months, and single births are the most common case. One woman can bear one child at a time, but a man can father multiple children by different mothers at any one time. If anyone is expendable, it is the men, not the women. Of course, it is handy if the man is around to do his share of looking after junior, but in a cooperative society the offspring will probably survive anyway. So men were preferentially exposed to danger, and older men were more likely to get trampled, gored or eaten than the younger men.

So, that is where we are. In the next entry, let us continue and have a close look at cellular biology.

Saturday, February 12, 2005

First word

Welcome to TangledThreads, the blog of an unremarkable man.

If there is an established protocol for how to start a blog then it is not one that I know. Instead, I shall explain why I am doing this.

I have a bad habit that my friends are kind enough to tolerate. That bad habit stems from who I am: I find a great many things interesting and I like to share ideas. Unfortunately, in my case, this means that I tend to respond to questions with lectures. One of my friends has extended this tolerance to the degree where he feels that my rambling discourses should be accessible to other people in the form of a blog. Indeed, he has gone out of his way to ensure that I do this. Accordingly, I have created this blog.

Perhaps I should say a little about who I am, as an introduction - doubtless I will fill in a profile, but some context may be helpful. I am 40 years old, male, and currently live alone, though I am part of a long-established long-distance relationship. I live in rural Berkshire in the south east of England. I have a scattergun approach to interests and switch between them in an apparently random fashion. Among the things that I might talk about are:

- Science
- Computers
- Poetry
- Fiction
- Music, especially Blues and Filk
- History, especially British history, especially the history of Roman Britain
- almost anything else that takes my fancy outside of the core areas.

I am not a known expert on any of these subjects, with the exception of a particular area of computing - I work for a large US IT company. I will leave it to your judgment how well versed I am in the other subjects. All that I can promise is that the discourses will be varied. I cannot promise that they will be interesting, because I have no idea what your interests are.

I don't propose to discuss my personal life very much because it is likely to be dull to anyone who doesn't know me and uncomfortable for those that do.

If you have any questions and feel that I might have an answer then please feel free to post them as replies. I can't undertake research campaigns for you but I will answer if I have any knowledge of the subject or any views.

I thank you for your interest on the safe assumption that you will not have read my thanks if you didn't read the rest.
