I happened to stumble across Reed Ghazala's book on circuit bending in my little local library's new book section
recently, while in the throes of practicing making VST plug-ins, inspired by an offer from the redoubtable maestro of
plug-ins, to use in my own music - though not always for the reasons I initially suspected they would.  I sit strangely in the
middle of the world of effects-using musicians (I am a hopeless addict, always have been) and the world of the black art of
digital signal processing.  I know how all of the typical effects work, am aware of the mathematical principles underlying
them, and even sit, rapt, reading the manuals of completely new and totally, giddily genius kinds of effects like
DtBlkFx, Lost Technology, and ariesverb (if you'd like to feel less smart than you do right now, ariesverb v0.4's manual is a
must-read).

Oddly, almost instinctively, I've become a sound designer.  The average synth-user thinks of this as someone who makes
the presets they scroll through on their keyboards.  I've belatedly come to realize that I've always been a sound designer,
obsessed with effect processors, playing without the institutional knowledge and expensive gear that until very recently few had
access to.  I never really inquired as to what the knobs I twiddled on the few stomp boxes and cheapo multi-fx units I've
owned did, as my ears told me what I needed to know; that and hours of happy experimentation.  I read the labels painted
or embossed on the hardware units, but I never had the manuals, and the manuals for gear I had seen were written in
inscrutable technical hieroglyphics.  The kinds of words not in the dictionary - and this was before the internet.  (Open
Office has just informed me that it doesn't have the word "internet" in its spell-checker (?!) unless capitalized, and that
"spell-checker" is not in there either, as a dvandva, but is acceptable hyphenated - an apt demonstration of the divorce between
technical jargon and everyday language.)

I spent years "scoring" music to have impossible fx-enabled part transitions in multichannel audio before the words
"automation" and "surround sound" would roll off the tongues of musicians - ever since the first hour I read about the internet,
about 3D binaural sound enabled by laser-scanned measurements of an individual's pinnae and special headphones, and about VR
gloves.  A glove was an obvious must for anyone meaning to mix in 3D, and it was also instantly obvious that it would be an
incredible improvement on the expression pedal, which I thought of as a wah-wah for other fx, as a VR glove potentially had
3 axes of movement and 5 fingers for controllers, too.  (In fact I actually phoned ART and Digitech asking them to start
making a 2-expression-pedal floor unit again, as I'd read that there once was one - somehow the fact that expression pedals
came separately and could be plugged into some of their devices never came up in my oddly innocent and detailed
conversations with their representatives!)  The glove I own now, the P5 dataglove, in fact has 11 dimensions of control,
brilliantly adding pitch, yaw and whatever the other turning one is called by sailors and aviators.  It cost me $40, shipped
from a kind internet acquaintance in Canada who discovered he didn't need 2, and that there are no lefty gloves anyway.
$40 for something I distantly hoped would be consumer tech sometime in my lifetime, and that MRI technicians and other
users of tomography still pay one hundred times as much for, for some reason.
So of course I immediately took to scoring parameter automation for plug-in effects and synths - I'd already been writing
music like this for years just in case a touring alien band's UFO crashed nearby.

And, not knowing what in hell a flanger actually did, not knowing the jargon "time domain process" nor what "phase"
meant, much of my sound design was a combination of experimenting and then imagining.  I never did find a way to
move those knobs while my other limbs were otherwise occupied by an instrument, nor did I buy a Tascam
Portastudio cassette 4-track recorder, which is not exactly the sort of thing one does post-production processing with,
though I was vaguely aware that people could do such things.  So a large component of my self-training was aleatory -
black boxes with unknown sound-shaping voodoo within which most people seemed curiously reluctant to explore, and
even less apt to find the magical "settings" as I thought of them where the rare but brilliant confluence of parameters
made the magic sounds I sought.

So certainly it should come as no surprise to me that, when making effects on my own using SynthEdit, with armloads of
3rd party modules whose scanty documentation is seemingly perfectly adequate to those who somehow (I literally
have no idea where these mad scientists picked up their esoteric arts) know DSP theory intimately, I would apply this
same methodology to achieving the sounds I'm after.  Because it is the sounds that I search for, not necessarily the
perfectly thought-through demonstration of my grasp of the underlying principles like some college student trying to please
their professor.

Which brings me to a strange paradox - these people who make the effects and synths I love seem oddly constrained by
certain horizons that I only hazily perceive.  It is astonishingly easy to make a variation of an effect which no one has tried.
 And yet 99% of all the plug-ins are attempts at perfecting some niche of some processor type, say dub delays or RMS
compressors, which implies to me that they think the tools will work for the user in some maximal way.  This, upon a
moment's consideration, is patently untrue.  No matter how good one's tool is, and I am by no means dismissing the need
for quality audio tools, the fact is that there's an astonishing variety of audio material for that tool to encounter out in the
wild.  And the nature of the audio matters a hell of a lot more than the tool that will be processing it in most cases.  That, I
suspect, is where the weird fetish for re-creating niche processors comes from - for niche genres.

Which is all well and good for those cats, but I'm not exactly the type that's going to be re-hashing Buddy Holly or Shabba
Ranks in the near future.  I will, however, plunder every single aesthetic idea I like that I encounter and mix them all up
into a synergistic jumble.  Genre cross-pollination is even more of a black art, but the music I came up on steeped me in it
so I've no worries there.  Suffice it to say there is as yet no textbook for that.  The complexity of the aesthetic decisions that
go into making many sounds all hang together as though they were always meant to do so is something that can only be
taught by excessive, close listening.

And here lies the crux of the strange silence from the quarters of creative new effects shenanigans - they require new
genres or sound designers to give them a use.  Whole electronic music genres are based on the layperson's unfamiliarity
with a new sound, a new effect - and the person to put it to a fitting use.  Or that person, like many others like me,
happened upon an effect they'd been hoping for all along - and then put it to a use that its creator never intended nor suspected.
The creators of the tools that people like me use are actually, and not surprisingly if you've read this far, not especially
good at designing sounds with their own tools that they know most intimately.  They wear their pointy mad scientist cap, we
wear our crazed artist hat, and we complement one another.  Studying a technical manual on the proper resonance to use
on allpass filters to create a ringing, metallic comb filter isn't exactly the way to learn creative effects manipulation.  
They're not mutually exclusive, exactly, but it's a rare individual in which both skill sets overlap.
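For what it's worth, the thing those manuals are on about is smaller than it sounds: a feedback comb filter is a few lines of code, and the "resonance" is just the feedback gain g in y[n] = x[n] + g * y[n - D], which rings metallically at sample_rate/D and its harmonics as g approaches 1.  A minimal sketch (the function name and values are mine, not from any manual):

```python
def feedback_comb(signal, delay_samples, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay_samples].

    The feedback gain is the "resonance": near 1.0 the filter rings
    at sample_rate / delay_samples and its harmonics.
    """
    out = []
    for n, x in enumerate(signal):
        y = x
        if n >= delay_samples:
            y += feedback * out[n - delay_samples]
        out.append(y)
    return out

# An impulse comes back as a decaying train of echoes every 2 samples:
print(feedback_comb([1.0, 0.0, 0.0, 0.0, 0.0], 2, 0.5))
# [1.0, 0.0, 0.5, 0.0, 0.25]
```

Feed it something other than an impulse and the same recursion tunes the input to the delay length, which is where the "ringing, metallic" character comes from.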

Certainly I myself am way further into the mad scientist camp than most of my instrument-playing brethren.  I resisted
reading the Master Acoustician's Handbook, knitting my brow at the mind-bending diagrams of flutter echo and phase
cancellation, mystified that there was any way to navigate the literally infinite complexity of indivisible frequency bands'
interactions, and re-interactions upon multiple echoes.  In the end I learned a lot that I've since applied,
though nothing like what a mastering engineer would glean from it.  I don't know what the variables sigma and delta imply in
the Wikipedia entries for Fast Fourier Transforms or Feedback Delay Networks.  I do know that 30 to 35 ms is the threshold
at which our physiological hearing apparatus starts differentiating an "echo" from a "thickening" of a sound, like
multi-tracking a vocal part or a chorus ensemble effect.
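That one fact is already enough to do arithmetic with.  At 44.1 kHz (an assumed sample rate; the helper names here are mine) the threshold works out to a bit over 1300 samples, and the same one-tap delay reads as "thickening" below it and as a distinct echo above it:

```python
SAMPLE_RATE = 44100  # samples per second, an assumption for this sketch

def delay_in_samples(delay_ms, sample_rate=SAMPLE_RATE):
    """Convert a delay time in milliseconds to whole samples."""
    return round(delay_ms * sample_rate / 1000)

def one_tap_delay(signal, delay_ms, mix=0.5):
    """Mix a signal with a single delayed copy of itself."""
    d = delay_in_samples(delay_ms)
    out = list(signal) + [0.0] * d
    for i, x in enumerate(signal):
        out[i + d] += mix * x
    return out

print(delay_in_samples(30))  # 1323 samples: still "thickening" territory
print(delay_in_samples(35))  # 1544 samples: now a discrete echo
```

The DSP in both cases is identical; only the ear's interpretation changes across that 30 to 35 ms border.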

Knowing all the science behind DSP will allow you to make a precisely-calibrated and high-quality effect or synthesizer,
but it can never allow you to infer from the underlying play of numbers what it'll sound like.  Some engineering problems
are like that.  You can build a bridge on a page that'll be the same when the construction workers are done with it, but you
can't use materials science to create a new lightweight but nearly-unbreakable building material (yet).  Nor can you
exploit the interesting little aberrations in the emulations of modular software that I so love, the little unintended glitches
and weird flourishes of the never-intended.

And, notably, sound designers like me more often than not end up pointing the way for the more recalcitrant and
conservative musicians on the far end of the spectrum who only adopt the new sounds after the weird and the way-too-high
have habituated the audiences to shocking new aural explosions.  Don't expect me to be knocking out next year's
string of hit songs.  Given that genres which don't protect their turf tend to get bastardized at an ever-increasing rate
nowadays, I've got some sure-fire ways to ensure nothing I'm involved in ever ends up in a car commercial.

All of which boils down to me not being very good at chasing down bugs in my jury-rigged software kludges.  I don't care if
certain parameter settings cause the whole thing to stubbornly go silent until I restart my DAW.  I move the controls,
experimenting with a fair degree of informed wandering, until the things I want happen, then tweak and tweak until it's just
right.  I manage to bog-down and mess up pretty much every piece of software I play with no matter how well-built, as I use
it in ways very much not intended by the programmers of the plug-ins nor the DAWs hosting them, so it's part of my
everyday music-making experience.  Learn what crashes the software, then only cautiously approach that limit and
memorize it...  and often wistfully wish that the damned thing would let you take it just a couple dozen degrees further...  I
even get weirdness to happen from trance gates and arpeggiators, the lamest of the lame in effects, nearly impossible to
use in any context creatively.

So you can imagine I was heartened by Mr. Ghazala's churlish essays in mucking up sound-making toys.  My half-informed
experiments within the SynthEdit visual software programming environment are not so very different.  If I don't know what
something does, and the volt meter doesn't seem to be very informative, I hook it up to some other part that I know does
make a sound in a way I suspect will make a good noise (a step-sequencer modulated by an LFO which combined
modulates another LFO's frequency which is being phase modulated by the incoming audio all of which is modulating the
pitch of an oscillator which ring modulates the incoming audio, for instance, should give a ring modulated sound with a
rhythm from the step sequencer which is stuttering up in bunches and unfolding into longer, slower patches between
syncopated tuples, burbling away at the ring modulator...) save it as a VST, then go apply it to some bass, some drums,
some piano just to see what happens, then twiddle the knobs.  Very often this results in one of the half-assed plug-ins I link
on my site.  And when I discover that certain controls simply do nothing, or nothing like what I intended, it's pretty hard for
me to keep the magic sound that I now cherish the plug-in for while giving it more straightforward controls that are easier to logically apply
- the weirdness is hidden somewhere in the commingling of informed DSP design, random guesswork wired together, and
just plain mistakes!  And when I apply my ever-increasing knowledge of what one is supposed to do with the modules in
SynthEdit, all too often the magic sounds disappear.  Which kind of person are you - would you scrap it because you can't
fully understand it, or keep it, deranged and half-working, for the weird sounds that it makes?
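That kind of tangled patch can be caricatured in a few lines of code.  A drastically simplified sketch, assuming one sine LFO standing in for the whole sequencer-and-LFO tangle, with names and default values that are mine, not SynthEdit's:

```python
import math

SAMPLE_RATE = 44100  # an assumed sample rate for the sketch

def lfo(n, freq, sample_rate=SAMPLE_RATE):
    """A plain sine LFO evaluated at sample index n."""
    return math.sin(2 * math.pi * freq * n / sample_rate)

def bent_ring_mod(audio, carrier_hz=440.0, lfo_hz=3.0, depth=0.5):
    """Ring-modulate audio by an oscillator whose pitch wobbles with an LFO.

    A toy stand-in for the patch in the text: the LFO bends the carrier's
    frequency, and the carrier multiplies (ring modulates) the input.
    """
    out = []
    phase = 0.0
    for n, x in enumerate(audio):
        freq = carrier_hz * (1.0 + depth * lfo(n, lfo_hz))
        phase += 2 * math.pi * freq / SAMPLE_RATE
        out.append(x * math.sin(phase))  # ring mod = multiply by carrier
    return out
```

Swap the single LFO for a step sequencer, cross-wire the modulators into each other, and the burbling, stuttering behavior described above falls out of exactly this kind of multiplication.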

The strange thing is that it's almost impossible for me to share the sound design tricks I pull off with heavily automated
chains of multiple effects - there are no rules of thumb or anything like that.  Everything is heavily dependent upon the source material,
even the tempo or other time domain processes going on in a mix, even the stereo width!  The scoring of the
automation is as important as the specific plug-ins being used, which don't make anything like the same sounds when their
controls are holding still, or when they sit in a different sequence or even at a different volume level in the chain.  But my VSTs are like crystallized
fragments of these unlikely corners of audio processing, condensed down into one function-specific sound module, making sounds that
aren't achievable by other means.  And knowing that these misbehaving audio toys will find their way to the bedrooms of others
who have been awaiting access to alien sounds like I was since my mid-teens entirely justifies every effort I can put into my
amateur projects.
Word-Feat Ear-Tweak
homesite of runagate