Sunday, 5 February 2006
Off Topic, but Amazing
Absolutely adorable.
Saturday, 4 February 2006
Music, Noise, Acoustics, Electronics, and the Future, Epilog
This is God-only-knows how far into the future, because the technology to make it happen perfectly every time with zero intrusiveness upon performers of traditional acoustic instruments would have to be so advanced as to be "virtually indistinguishable from magic" to us, as Sagan used to say.
To give an idea of how difficult a job this would be, let's consider the interface(s) used to connect a guitar to a computer. The technology for interfacing a guitar to a computer or synthesizer is, with a few exceptions, the same today as it was when Roland and ARP came up with the idea: A pickup with six individual magnetic sensors - one for each string - is used to sense the pitches. This is an incredibly poor idea because a guitar string - especially a steel one - has a large, unstable, and highly variable amount of harmonic content. As a result, filtering programs have to be employed to sort through this over-abundance of information and determine what the fundamental is. Add to this that guitarists use all manner of expressive nuances such as vibrato and pitch-bend, and the difficulty of accurate interpretation simply compounds. This Gordian knot was so difficult to untie - and impossible to cut through - that one manufacturer of synthesizers, ARP Instruments, was actually put out of business by the problems encountered with its Avatar guitar synthesizer.
Since I was a pioneer in this area, I was around when all of this happened, and I can testify to the fact that early guitar synths were impossibly problematic: The interfaces were so easily "confused" that the vast majority of guitarists could not use them at all. One had to modify one's technique to minimize mis-cues by playing cleanly and deliberately; you couldn't just pick up the axe and flail away and expect it to work. When a note on the guitar decays, sometimes upper harmonics actually become more prominent than the fundamental, so the units would often begin to track those (I called it the "Arabian Melody Syndrome" because the resultant "melodies" reminded me of something a snake charmer might play).
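To see why fundamental detection is so hard, here is a toy illustration: a naive autocorrelation pitch detector (a common textbook approach, not whatever proprietary algorithms Roland actually uses) applied to a string-like tone whose second harmonic is nearly as strong as its fundamental.

```python
import math

def detect_fundamental(samples, sample_rate):
    """Naive autocorrelation pitch detector: find the lag (period in
    samples) at which the signal best matches a shifted copy of itself."""
    n = len(samples)
    best_lag, best_score = 1, 0.0
    # Search lags corresponding to roughly 60 Hz .. 1000 Hz
    for lag in range(sample_rate // 1000, sample_rate // 60):
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sample_rate / best_lag

# A string-like tone: strong 2nd harmonic (400 Hz) over a 200 Hz fundamental
rate = 8000
tone = [math.sin(2 * math.pi * 200 * t / rate)
        + 0.8 * math.sin(2 * math.pi * 400 * t / rate)
        for t in range(2048)]
print(detect_fundamental(tone, rate))  # -> 200.0
```

This succeeds on a clean, steady tone; on a real decaying guitar string, where the harmonic balance shifts from moment to moment, the winning lag can jump to a harmonic mid-note - exactly the "Arabian Melody Syndrome" described above.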
This was circa 1980, and here we are over a quarter of a century later (!) and the Roland hexaphonic pickup is almost exactly the same unit it was when I was a Synclavier Guitarist back in the mid-to-late 80's. Guitar synths have become more playable mostly through a combination of designing "dead" guitars (Guitars which generate less than stellar harmonic content) and improved pitch discrimination algorithms.
Of course, a hexaphonic magnetic pickup is a non-starter for a flamenco or classical guitarist who plays on nylon strings, so Richard McLish of RMC Pickup (No Wiki entries) designed the Polydrive, which my Godin Multiac Grand Concert Synth-Access electric nylon string guitar uses. BUT, this is just another hexaphonic pickup that uses contact transducers to sense the string vibrations: Same concept as the Roland design, but executed with transducers. I actually got it to create hexaphonic distortion - which sounds clear even with complex harmonies - and not to control a synthesizer (Though nylon strings and contact transducers make tracking slightly less problematic when compared to steel strings and electro-magnetic pickup systems). Basically, I'm not going to return to guitar/synthesizer combinations until the technology is advanced enough that the interface is "invisible" and every nuance of my playing is accurately translated. That may be never for me, as a forty-eight-year-old, with the current pace of "progress."
Over the decades I've seen lasers used in an attempt to obtain an instantaneous and accurate signal for interfacing instruments to computers - and that may be one of the proper paths when the tech becomes ubiquitous enough - but I have changed radically from the optimist I used to be about these possible advances into the... ah... "less than sanguine" agitator I am today.
To be truly transparent and totally invisible, the technology is going to have to "devolve" to using a simple, high quality microphone (Or any pickup) of any standard design type, and the computer is going to have to be powerful and accurate enough to interpret that signal - whatever polyphony it receives - and translate every nuance virtually instantly.
Perfectly. Every time. No exceptions.
The idea just seems elementally simple to me, but of course it isn't.
Total. Transparency.
Music, Noise, Acoustics, Electronics, and the Future IV
The logistical and technical barriers to where I have, for over a decade, imagined the sonic arts could venture are starting to disappear. If we take a look at the possibilities as they have existed and as they have been haltingly explored in the past when trying to combine electronic music and noise-art with acoustic music and noise-art, we can see both the problems involved and the solutions required. I am not going to address the situation in which all acoustic and electronic instruments are under individual and direct human control, as this is simply an obvious case which differs little from any traditional performance arrangements. What I want to address is the situation - let's use a symphony orchestra with choir for example - where all of the electronic elements are composed and reside within a computer wherever possible. In this "ultimate solution" therefore, all of the electronic elements will be pre-existing and pre-recorded. I'm doing this to streamline the explanation, so just keep in mind that human performers on electronic instruments can be added to the following scenarios in whatever ratio is desired, though logistical problems will compound when this is attempted, as you will see.
When attempting to integrate live performers with electronic and computer instruments, the following possibilities present themselves when the above restrictions are observed:
1] Live performers with a static pre-recorded version of the electronic elements as accompaniment.
Obviously, this was the first reality to appear: From the moment such a thing became possible, performing to pre-recorded analog tape tracks was ubiquitous during both studio sessions and live performances where there was no other possibility (Or the performers were so lame that they had to cheat). In reality, it does not matter if the electronic elements are recorded on analog tape, digital tape, digital magnetic disk, or digital optical disk: It's all the same concept. I used to subscribe to the notion that there was an exception for sequencers, such as the Synclavier's Digital Memory Recorder or MIDI sequencer devices - the idea being that such a situation qualified as a live performance - but I have since shed that notion: It's ultimately a static pre-recording nonetheless. However, it was computer generated sequencing technology that would eventually allow for this prison to be broken out of.
The problem here is that the performers are slaves to the recording: While dynamic expressiveness is possible for the performers, temporal expressiveness is out of the question. The recording is, of course, obstinately static and sounds exactly the same every time. To call this situation less than ideal is an understatement for art music, but computer-accurate percussion tracks are actually an aid for pop/rock and especially dance music a significant portion of the time.
2] Live performers with a dynamic pre-recorded version of the electronic elements as accompaniment.
This can only be realized with computer-driven sequencing technology that has some sort of expressive interface. The simplest version of this must have temporal and dynamic variability, and that can be realized with something as simple as a human "conductor" for the computer sequencer who has tempo and dynamic control. This has been possible for several years using simple MIDI expression devices like foot controllers: One pedal for the tempo and one for the volume would cover the basics, and the conductor could still slash the air with a stick to cue the performers (Though I wonder how many traditional conductors could manage to chew gum and conduct simultaneously, much less how many (few) would be willing to diminish their egos enough to learn how to do this effectively).
This particular situation actually has far more dynamic implications than it might appear to at first blush, because with a computer sequencer and synthesized and sampled sounds dynamics don't have to apply only to volume or tempo: The dynamic input (Whether it is primarily a tempo controller or primarily a volume controller) can be mapped to filter envelopes, FM envelopes, LFO's, stereo pan, or anything else the composer can imagine that the software will allow.
When these extended possibilities are well thought out (composed), then each performance could be radically different with only slight variations of the expression inputs!
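The one-input-to-many-parameters idea is simple enough to sketch in a few lines. Here is a hypothetical Python mapping - the parameter names and ranges are illustrative, not taken from any particular sequencer - showing a single MIDI expression pedal value (0-127) driving tempo, filter cutoff, and stereo pan at once:

```python
def map_range(value, lo, hi):
    """Scale a 0-127 MIDI controller value into the range [lo, hi]."""
    return lo + (value / 127.0) * (hi - lo)

# Hypothetical mapping: one expression pedal drives several parameters
# at once, as described above -- tempo, filter cutoff, and stereo pan.
def apply_expression(cc_value):
    return {
        "tempo_bpm":     map_range(cc_value, 60, 180),    # slow .. fast
        "filter_cutoff": map_range(cc_value, 200, 8000),  # Hz
        "pan":           map_range(cc_value, -1.0, 1.0),  # left .. right
    }

print(apply_expression(0))    # pedal heel-down
print(apply_expression(127))  # pedal toe-down
```

One continuous gesture from the "conductor" thus bends several dimensions of the sequence at once, which is why small variations of the expression inputs can yield radically different performances.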
Here, the electronic musical and noise elements began to come into their own, and they become a factor that adds significantly to the resulting effect of the overall performance. Not only would the performers be reacting to each other, the conductor, and the audience, but they would also be relating, interconnecting, and reacting to the electronic elements to boot.
Here's the killer: IF ALL OF THE ELECTRONIC MUSICAL AND NOISE-ART ELEMENTS ARE PRE-RECORDED AND RESIDE WITHIN A COMPUTER, THEY CAN BE BROUGHT TO ANY EXISTING SYMPHONY ORCHESTRA FOR A PERFORMANCE!
This kind of a situation would allow an infinite number of different composers on an infinite variety of computer systems to compose in the sonic arts using whatever variety of electronic and acoustic elements they wish to combine to go on tour with their works and reach any public audience who have access to an orchestra and a hall. This is the kind of symbiosis that is possible right now between the acoustic and electronic worlds.
Obviously, there are logistical problems with this kind of an idea, the main ones of which relate to the computer system and the sound system it requires. Such things need to be toted around, but the computer and sound system required to integrate with a relatively quiet symphony orchestra - even when the added choral resources I mentioned previously are present - would be absolutely minuscule compared to the gargantuan sound systems that even moderately successful pop music acts tour with. I could carry around such a computer and sound system in the covered bed of my pickup truck. Rehearsals for such novel presentations would be more intense and would require the added dimension of a sound check, but any talented live sound engineer could handle the technical issues involved there with ease.
But there are still more possibilities for effective interconnection between the human performers and the computerized elements.
3] Live performers with a dynamic, cue-based pre-recorded version of the electronic elements as accompaniment.
Here, the (near) future is the subject, so the possibilities become infinite and limited only by the imagination.
First, let's eliminate the conductor's added burden and allow him to return to his usual histrionics: We'll let the performers determine the cues for the computer to follow (Though a conductor's baton could certainly be harnessed as a controller device if the interfacing element was wireless and tiny enough). Starting with the idea that all, or some, or one of each of the instruments in the orchestra can have a controlling input to the computer (Such devices would have to be small and unobtrusive enough not to irritate the performers, and they would also have to be harmless to the sometimes wildly valuable instruments; wirelessness would certainly be a plus, though not absolutely required), and considering the myriad effects these inputs could be mapped to within the dynamic computer sequence (Pitch, volume, tempo, envelopes, pans &c.), one can see that this would offer truly limitless possibilities for symbiosis between the performers of the acoustic elements of the music and noise-art and the electronic elements of the music and noise-art. Since the possibilities are endless and in the realm of conjecture, I need spend no more time describing them. If your head is not spinning wildly with possibilities at this point, you are a dullard with no imagination anyway.
As is always the case with these sorts of advances, the barriers that are preventing this from happening are not only technical and/or logistical. A personality is going to have to emerge with the imagination, talent, resources, and force of will required to overcome the inertia of conventional thought and institutional resistance.
Leaving a little to the imagination instead of spelling it all out is... er... cool.
Friday, 3 February 2006
Music, Noise, Acoustics, Electronics, and the Future III
*****
Obviously, not every composer went the way of the atonalist noise-art pioneers. Notable among the traditionalists - at least in this regard - were Aaron Copland (Arguably the greatest American composer of the twentieth century), Igor Stravinsky, Leonard Bernstein, and George Gershwin. This is by no means a complete list - in fact, it's just the composers who are Americans or became Americans (Hey, I'm an American) - but if you think about this list for a minute, you will notice something significant: All of these composers used elements of folk music or jazz in at least some of their works (And, what is it about Russian Jews? All of those men are of Russian Jewish extraction).
There are only two kinds of indigenous American music if you boil it down: Scots-Irish folk music and the African-American blues. The blues basically evolved along different paths - not always completely separate - into the various jazz and rock idioms. When the blues cross-pollinated with Scots-Irish folk music, C&W was born. Over time, the blues/jazz tradition became very rich and highly varied, and it borrowed elements from almost everywhere.
Gershwin was doubtless the most facile at blending traditional elements with jazz, and had he lived longer, I believe he would have stood head-and-shoulders above any American composer who has yet appeared for this very reason. Interestingly, Gershwin used some noise-art effects in some of his pieces (As did Beethoven in at least one famous instance over a century earlier). Stravinsky went through a sort of noise-art stage, but he never let the serial technique totally rule over him or his music: There was always something particularly Stravinsky-like in everything he wrote (Obviously, I think this is a desirable attribute). Bernstein was a towering giant of an intellect (and ego), and there is a fabulous video interview of him explaining why music will always be governed by the overtone series that I never tire of seeing: I obviously agree to the point where I no longer even entertain any other possibility.
If we fast forward to the last couple of decades of the twentieth century, it was really the upper echelon popular music personalities who were doing the most musically significant experiments (Or, at least the ones I found most interesting). People like Sting and Paul Simon brought into popular music elements of world music and jazz to the point where categorizations became meaningless; indeed impossible. Obviously, with the constant presence of synthesizers and samplers, much - if not all - of this new pan-world music was liberally seasoned with sprinklings of noise-art.
*****
With all of these myriad possibilities available to the creative sound-artist, in both music and noise, and both acoustic and electronic idioms, you might expect that effective amalgams would have appeared which would have coalesced into compelling stylistic expressiveness. This has not, as of yet, really been the case. Sure, there are a few composers like Penderecki who have evolved into a stylistically nice place, but he is very much a traditionalist from my point of view insofar as he has remained within an acoustic realm that is essentially the same as it was a century ago.
What has yet to appear is a stylistic amalgam that has traditional music and its tangential noise-art combined with elements of pan-world music and electronics. Part of this is the result of the nascent nature of some of the technology required to make this happen in an organic and artistic way, but there is enough technology available now that there must be another finger in the dike. There is, and it's factionalism.
Much of the current situation in the sound-related arts has nothing whatsoever to do with the imaginary construct of "high art" versus "low art" and everything to do with ignorance and laziness that results in shallow and meaningless dilettantism posing as something it isn't: Art.
In order to make a significant statement in the sound-arts today, a lot of sweat-equity must be invested. Most students are not only unwilling to do this, but most teachers don't enable them or even encourage them to do so (Primarily because the teachers are ignoramuses as well: The blind leading the blind).
*****
Without continuing on this rant-line (I'd love to, but you get the idea), I will simply say that the solution is obvious: Autodidacticism. At no other time in human history has so much information been available to so many and for so little cost: There is nothing standing between the student and knowledge except for conventionalism and its attendant propaganda, and there is no excuse for failure aside from laziness and... lack of talent.
I would like to see, within my lifetime, the appearance of a new amalgam of all of the sonic arts. I guess if you want something done right, you have to do it yourself. Crud.
It's fine just as it is.
Music, Noise, Acoustics, Electronics, and the Future II
In a discussion with some friends on these subjects, the idea came up that composers in the early part of the last century felt that they had "hit the wall" with music (With the definition I use, the term "tonal music" is from The Department of Redundancy Department). The decision may not have been to consciously go outside of music per se - in fact all of the evidence I'm aware of points to the conclusion that Schoenberg and his disciples considered their work to be music - but the end result was that music was in essence abandoned, and noise was embraced, when you go by my definitions. And I believe my definitions are, well, definitive. ;^) Call it accidental, naive, or whatever, that was just how it went down.
Looking for sonically expressive or evocative sounds outside of music is far from a bad thing in and of itself, but to call it music creates some intuitive cognitive dissonance among audience members who instinctively understand what music is and isn't, and who are expecting to hear music. Another part of the problem was that, instead of simply adding noise to music as an additional expressive resource, music was entirely eschewed and replaced by noise. Additionally, some of these early attempts at The Art of Noise were deliberately and aggressively dissonant to the point of striking many listeners as esthetically ugly. Some were not, however: I recently listened to some of Webern's orchestral noise-art that was posted at another blog - the first time in over fifteen years I had attempted such a thing - and I found some of it to be quite charming, and the sound effects (And that is really what they are) achieved with the orchestra were quite compellingly sophisticated. Pretty, even. It helped a lot that the pieces were startlingly brief, though.
The medium of conveyance for these early noise-art attempts was also from a world that had always been the exclusive domain of music: The traditional orchestra and its various instruments in smaller derivative combinations, plus voice. This also caused some cognitive dissonance among audience members, but also among the performers, many of whom found these new noise-art experiments disappointing and unfulfilling from a performer's point of view: No emotional paycheck similar to what music gives was forthcoming for them. Admittedly, there were performers who took to the new genre, but I don't care how you slice it, dice it, or cut it into julienne fries, they were a distinct minority, outside of the norms.
Of course, these early noise experiments were also written in standard notation, though that broadened over time to include more graphic indicators than there are characters in the Chinese/Japanese Kanji writing system. This trend was indicative of one of the underlying problems with acoustic instrument-based noise-art: Some of the best noise will forever lie outside of the human ability to produce sound with traditional acoustic instruments, as well as outside of the ability of traditional notation - no matter how much it is expanded - to describe.
I take the view that something good comes out of everything, and I believe the noise-art that was an outgrowth of music has certainly added many fine effects to film scores. However, those effects practically vanish into meaninglessness without the associated video, so it comes as no surprise to me that noise-art using traditional instruments never caught on as a stand-alone replacement for music.
But there is more to it, of course. Many fine sound effects that came of the orchestral noise-art experiments have been appropriated by composers who use them intermingled with music to produce an expanded art form that combines both music and noise. I happen to absolutely adore (Let me grab the CD, because I can never spell his name right) Krzysztof Penderecki's St Luke Passion for this very reason: Music, noise, song, and spoken word are combined into a highly effective and highly charged art form that is what I think the original noise-art pioneers should have had as a goal. History is history, of course, but I think this amalgam of music and noise-art is where the future of purely orchestral and orchestral/choral writing lies (Have you seen that brilliant Honda commercial with all of the noise-art for the soundtrack produced by a choir? God, that's cool!). All it would take would be for a person with the talent and disposition of a Beethoven to come along and a whole new universe of expressive "tone poetry" would open up. That seems like an inevitability to me.
*****
While this noise-art which was an outgrowth of music was developing, there was a parallel and initially unrelated series of experiments going on in the field of electronics: The synthesis of sound. Some of the groundwork for this was laid as far back as Brahms's lifetime by Hermann von Helmholtz, but I don't want to get too bogged down with that tangent. It is important to note that the first fully electronic instrument to appear was the Theremin, created by Leon Theremin in 1919. Though it seemed to be merely a novelty at the time, visionaries like Joseph Schillinger saw the potential of the electronic medium, albeit a bit dimly. Schillinger did, however, foresee our current situation in which we, as composers, are no longer slaves to performers, and that we can now experience all manner of music and noise-art which would be impossible for acoustic instruments to produce, or for human performers to execute.
As far as the actual effective synthesis of sound, however, it took until 1964 for Bob Moog to release his Moog Synthesizer to the world. This ponderous early analog synthesizer was very tedious to work with, as patch cords (hence the term "patches" for sound programs, still used in some quarters today) were required to route oscillators to envelopes and LFO's and noise generators and filters &c. However, entirely novel and previously unheard-of classes of sounds were created by it, and so the first purely electronic noise-art began to appear.
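The signal flow those patch cords established - oscillator into filter into envelope - can be sketched in a few lines of Python. This is a minimal illustration of subtractive synthesis, not a model of the Moog's actual circuitry: a harmonically rich sawtooth, a one-pole low-pass filter, and a simple decay envelope.

```python
import math

RATE = 8000  # samples per second

def saw_osc(freq, n):
    """Harmonically rich sawtooth oscillator -- the raw material."""
    return [2.0 * ((freq * i / RATE) % 1.0) - 1.0 for i in range(n)]

def lowpass(samples, cutoff):
    """One-pole low-pass filter: subtracts the upper harmonics."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / RATE)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def decay_env(samples):
    """Linear decay envelope shapes the amplitude over time."""
    n = len(samples)
    return [s * (1.0 - i / n) for i, s in enumerate(samples)]

# The "patch cords": oscillator -> filter -> envelope
voice = decay_env(lowpass(saw_osc(110.0, RATE), 800.0))
```

The tedium the post describes came from wiring exactly this chain - and far more elaborate ones - by hand, one physical cord per arrow.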
Unlike the situation with the noise-art which was an outgrowth of music, the electronic noise-art pioneers were of a more populist nature. Moog's synths rapidly appeared in everything from TV and radio ads, to B-movie soundtracks, to rock and roll music, to artsy-fartsy stuff (Though not always with the approval of the establishment). As a result, electronic noise-art became a part of popular culture and connected with the public in a way that, nearly half-a-century later, the noise-art which was an outgrowth of music never has managed to do.
Again, there is more to it, of course. Though these early sounds may strike our ears today as beeps and squawks - and lacking in sophistication - they were a sensation when they were first heard. Beyond that, there was no preconceived notion among audiences about what to expect from electronic instruments, as there was with acoustic instruments and voices. This compound novelty of sonic timbre and instrument made the electronic medium naturally more suited to the pure Art of Noise creations: The electronic medium started off with advantages over the acoustic medium, despite the primitive nature of early electronic instruments.
The limitations of subtractive analog synthesis were many, and entire classes of sounds were out of its reach. Around 1970, John Chowning discovered a new synthesis technique called Frequency Modulation, or FM. Contrary to the belief of many, the first FM synth was an analog synthesizer, not a digital one. Though the Yamaha GS1 was ponderous and expensive, much like the original Moog model, by 1983 the DX7 appeared, and it changed, literally, everything (Just as the Mini Moog had earlier). Cheap enough for a reasonably successful gigging musician to own, it was capable of a wild variety of bell-like and glassy sounds that no analog synth could ever match.
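Chowning's technique itself is remarkably compact: a modulator sine wave perturbs the phase of a carrier sine wave, and the modulation index controls how many sidebands - how much brightness - result. A minimal two-operator sketch, with illustrative parameter values:

```python
import math

RATE = 8000  # samples per second

def fm_tone(carrier, ratio, index, seconds):
    """Basic two-operator FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    The modulation index I controls how many sidebands (how bright) the
    tone is; non-integer carrier:modulator ratios give inharmonic,
    bell-like spectra."""
    fm = carrier * ratio
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * carrier * i / RATE
                     + index * math.sin(2 * math.pi * fm * i / RATE))
            for i in range(n)]

# A glassy, bell-ish tone: inharmonic 1:1.4 ratio, moderate index
bell = fm_tone(220.0, 1.4, 3.0, 1.0)
```

Two multiplies and two sines per sample yield spectra that subtractive patches full of oscillators and filters could never reach - which is why the DX7's glassy sounds were such a sensation.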
While that populist revolution in digital noise-art was going on, an instrument was created at Dartmouth College called the Synclavier. The Synclavier actually beat Yamaha to the digital version of the FM synthesizer, but ironically had to license the FM tech from Yamaha to access Chowning's work. What was so special about the Synclavier though, was that it not only used a combination of additive and FM synthesis (The Yamaha DX series used an arcane FM-only algorithm setup which was far less intuitive to work with), but it also married the digital synthesizer to a computer. So, not only could you create numerous classes of sounds, but you could also create musical or noise-art melodies or pitch-strings which would be impossible for a human performer to execute: Music notation was no longer a limiting factor.
With the advent of the Musical Instrument Digital Interface standard, or MIDI in 1983-84, Joe six-pack was given the ability to connect a computer to his synthesizer: Music and noise-art for the masses.
As an aside, I bought a Synclavier in 1983 and owned it right up until last year. I was working in the music industry during those years (I worked at E.U. Wurlitzer in Boston and at Manny's Music in Manhattan from 1984 to 1986 as a side job to my gigging (Or the other way around, depending)), and they were heady times. Serious glory days.
I programmed many sound effects on the Synclavier, and many of those timbre programs were distributed by New England Digital with the Synclavier. Since I was a pioneer of electronic noise-art, samples of which you can download as MP3's here, if you are so inclined, I ran into some problems with that medium which are "the same difference" as those experienced in the area of musically-derived noise-art.
While synthesizing timbre programs was very exciting for me - some days I would spend over eight hours on that task - and creating noise-art or music with them was also very exciting from a compositional standpoint, there was no excitement, or tension, or drama, or symbiosis... there was nothing much interesting about sharing the results with an audience. There remained a void inside of me where the performer used to be that could not be filled by pressing "play" on the Synclavier's Digital Memory Recorder, and then sitting back with the audience to listen. While there is no doubt that this electronic noise-art was more interesting for the audiences than the acoustically-derived forms of noise art generally are - I got some awesome compliments from people who I happen to know absolutely detest "atonal music" - there was still something missing from this medium, and that was the human-to-human communicative element that only live performances can convey.
*****
There was a third path that noise-art followed, and it began with recorded sound. This path actually originates earlier than any of the others, as Thomas Edison invented the phonograph in 1877 (There is a great recording of Edison and Brahms (!) which I have heard, and every musician who loves Brahms should hear it: Brahms' voice sounds exactly as I would expect after hearing his music. It's quite fascinating).
Nearly forty years later, after many innovations which included wire recorders and steel tape recorders, magnetic tape recording appeared in Germany in 1935. In the post-war years this technology developed, became less expensive (cheap, actually), and filtered into everyday life. It was during this time that artists started experimenting with recorded sounds to create noise-art. By the middle of the 60's, artists like Steve Reich were producing pure recorded noise-art like It's Gonna Rain and Come Out.
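The process behind those Reich tape pieces is easy to quantify: two decks play the same loop at slightly different speeds, drift out of sync, and eventually realign. A quick back-of-the-envelope calculation (the loop length and speed difference here are hypothetical figures for illustration, not Reich's actual numbers):

```python
# Two copies of the same loop played at slightly different speeds drift
# apart, then eventually realign -- the phasing process behind Reich's
# tape pieces like It's Gonna Rain.
loop_seconds = 1.8            # length of the spoken-word loop (hypothetical)
speed_a, speed_b = 1.0, 1.01  # deck B runs 1% fast (hypothetical figure)

# After t seconds, deck B is ahead by t * (speed_b - speed_a) seconds.
# The decks realign when that offset equals one full loop length:
realign = loop_seconds / (speed_b - speed_a)
print(f"Decks realign after {realign:.0f} seconds "
      f"({realign / 60:.1f} minutes)")
```

A one-percent speed difference thus stretches a two-second loop into minutes of continuously shifting texture, which is the whole compositional engine of those pieces.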
Just as synthesis went through an analog stage, so did recording. When the digital age dawned for recording, the results were disastrous for synthesis. It was a case where the lowest common denominator ruined everything for the illuminati, as far as I'm concerned. I'm referring to digital samplers, of course.
Back in my snottier-than-thou Synclavier programmer days, I used to berate programmers who tried to replicate acoustic instruments or natural sounds. The point of synthesis for me was that I could create things that had never been heard before. Alas, I was in the (intelligent) minority: Most were after the Holy Grail of "the perfect string pad" or some such nonsense.
The main competitor for the Synclavier was the Fairlight, which was a sampler with a computer (And which sucked compared to the Synclavier for just that reason), but when NED came out with Poly Sampling, it was over: No synthesis abilities were ever enabled on the 16-bit voice cards, and owners like myself stopped upgrading.
That was circa 1987, and synthesis has yet to make any more significant strides. But enough of this rant.
*****
So, today we have music of all kinds to draw from as a resource, noise-art which evolved from music and which uses acoustic instruments as a resource, synthesis-based timbral noise-art as a resource, and sample-based sound noise-art as a resource; the latter two of which can be combined with computers individually or together.
Next time, we'll explore the possibilities for today and tomorrow.
Of course: I forgot "feather-plucking-related noise-art."
Wednesday, 1 February 2006
Sometimes I Hate These, "Which (blank) Are You?" Deals
I'm a Porsche Boxster!
You're stylish, nimble, and good-looking. When it comes to having fun, there are few who can surpass you. And yet, you suffer from a lingering inferiority complex. Maybe it's because you have an older relative who is always in the limelight?
Take the Which Sports Car Are You? quiz.
I'm not so sure about this result. I rather fancy myself as a 70's era Ferrari 512BB in Red over Black, but I did answer "no" to the "Are you high maintenance?" question. If I were as cute as a Boxster, I probably wouldn't be single. In fact, if I were that cute I'd probably be gay (Not that there's anything wrong with that).
But then, I did own a yellow 1974 Fiat X1/9 back in the mid-seventies, so perhaps that's not so far off the mark. The Boxster part I mean! LOL!
Like I said, red over black (I wouldn't mind being that car!).
Music, Noise, Acoustics, Electronics, and the Future
My little Apple Dictionary that resides in OS X defines music thusly:
1 the art or science of combining vocal or instrumental sounds (or both) to produce beauty of form, harmony, and expression of emotion.
Obviously, I was not consulted in the writing of this dictionary.
For this discussion, the following definition of music will apply:
The art of tone setting, in which implications of the harmonic overtone series are explored through the establishment of pitch hierarchies, the describing of melodic trajectories, the revealing of harmonic structures, the execution of contrapuntal motions, and the propagation of rhythmic patterns; as well as any of these elements in isolation, or any combination of them together.
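Since the overtone series does the heavy lifting in this definition, it is worth noting how simple the underlying physics is: partial n of a fundamental f vibrates at n times f. A quick sketch:

```python
# The harmonic overtone series the definition invokes: partial n of a
# fundamental f vibrates at n * f.
fundamental = 110.0  # A2

partials = [(n, n * fundamental) for n in range(1, 9)]
for n, freq in partials:
    print(f"partial {n}: {freq:6.1f} Hz")

# The interval between successive partials shrinks as n grows
# (octave, fifth, fourth, major third, ...), which is where tonal
# pitch hierarchies come from.
```

Everything in the definition above - pitch hierarchies, harmonic structures, and the rest - is an exploration of the relationships among these integer multiples.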
The Mini gives this as a definition of noise:
1 a sound, esp. one that is loud or unpleasant or that causes disturbance.
Again, I am available for consultation, and my fees are properly exorbitant.
For my purposes here, the following definition will apply for noise:
Any sounds which fall outside of the definition of music.
Examples:
Music= Gregorian Chant/Noise= The sound of fingernails across a chalkboard.
Music= A Mass by Perotinus Magnus/Noise= A dentist drilling into a molar.
Music= A Mass by Palestrina/Noise= A chiropractor adjusting a spine.
Music= A Fugue by Bach/Noise= Air escaping a balloon through a tightly stretched opening.
Music= A Symphony by Mozart/Noise= A jet engine at takeoff thrust or in afterburner mode.
Music= A Symphony by Beethoven/Noise= A diesel locomotive towing a hundred cars worth of freight.
Music= A blues by Louis Armstrong/Noise= An ambulance with a heart attack victim aboard.
Music= A drum solo by Gene Krupa/Noise= A firetruck on the way to a burning mayor's house.
Music= A bass solo by Jaco Pastorius/Noise= A woman who has just seen a mouse or a snake in her bedroom.
Notable "gray areas":
Birdsong (Sometimes melodic, but not usually tonal or modal), a jackhammer or piledriver (Perfectly rhythmic, but not really musically so), a Jimi Hendrix or Adrian Belew guitar solo (Contains some pure music, some mixtures of music and noise, and some pure noise just for the sonic effect of it).
Noise often called music, or rather, mistaken for music:
Atonal "Music" is actually noise art with acoustic instruments: The implications of the overtone series are not brought into play when one tone relates only to another and all tones share equal standing; in fact, that's not a bad partial definition of noise right there.
Electronic "Music" sometimes is actually music (Switched-On Bach), but many times it is a mixture of music and noise art, or even pure noise art.
That should give you a pretty good grounding for my next post, which will be my thoughts on the implications of all of this.
That would be noise, despite the beauty of form displayed.