All posts by rustin

Using the Windows version of LTSpice on a Mac running OSX

I used to host a version of LTSpice made with WineBottler so that those of us on Macs could run LTSpice without starting up a virtual machine (such as VMware Fusion). This actually gained some popularity and I got a surprising number of emails from people thanking me for it. I even found it linked on a couple of EE course syllabi from around the world. Eventually Linear released a native version for OS X, and I found no reason to continue hosting my bottled version, considering the entire application was upwards of 200MB (I included the whole Wine environment in the package so people wouldn’t have to install Wine to run it).

Since then, I’ve come to realize that the native Mac version of LTSpice lacks a number of features found in the Windows version and can be quite frustrating to use, especially for those new to LTSpice. I’m sure LT will eventually get around to improving those, but for the time being, I’ve decided to make a newer bottle of the Windows version of LTSpice. This one is tested to work on Yosemite; I’d love to hear from you whether you’ve found it to work or not on other releases of OS X.

So here’s the latest bottle I made. Keep reading below for a summary of the differences between this and the native Mac version.

…I’ll update this soon with pictures, etc. But the biggest differences are a toolbar at the top (no need to use keyboard shortcuts) and some nice wizards for generating the code for different types of sources.

Is it true that “engineers design systems to manage pressure and flow?”

This is an answer to a question originally asked on Quora. You can find the original here:
Is it true that “engineers design systems to manage pressure and flow?”

An electrical engineer was explaining to me how electricity works in buildings, and I got the impression that all of engineering can be boiled down to managing pressure and flow in systems. This seems too simple to me, and I think there is more to it. Can someone help me understand what I am missing?

Answer by Rustin Bergren:

What you’ve stumbled upon is called the Hydraulic analogy. It’s an incredibly useful tool for teaching the very basic concepts of electricity in circuits. Interestingly, the ideas of a ‘pressure’ and a ‘flow’ can be extended to mechanics and thermal systems (and computer networks and vehicle traffic engineering and on and on).

Here’s a chart from the Wikipedia page System equivalence showing the relationship between parameters of different systems.

(note: I have a couple of disagreements with this chart; however, pointing them out is not really germane to the answer)
The analogy doesn’t work perfectly, however. Notice that there isn’t an equivalent term for inductance in thermal systems. From this, I think we can gather that thermal systems do not oscillate (I’m pretty sure).
Also, the simple fluid analogy for electrical engineering becomes pretty much useless after a first-semester circuits course. It’s great for understanding the overall qualitative constructs of the stuff at hand, but it doesn’t get you much farther than that. For that matter, this simplistic fluid analogy wouldn’t get you all that far into an understanding of fluid mechanics either.

So what you really have here is a case of having a hammer and seeing everything you come in contact with as a nail. Indeed, we can draw analogies between numerous types of systems, though many times we have to modify the hammer a bit to make it work for that particular type of nail.

So yes, you’re right. It’s way too simple if you’re trying to reduce all of engineering to a simple metaphor. So it’s not at all about ‘pressure’ and ‘flow’ if you want to boil engineering down to something. What it can all be boiled down to is the application of mathematics and physics. In the case of RF Engineering (electrical) and Aeronautics (mechanical), it has to do with applying a lot of Vector Calculus. In the case of Control Systems Engineering (electrical/mechanical) and Heat Transfer Engineering (mechanical) it’s more about solving Partial Differential Equations (PDEs).

But, if you’re trying to explain the basics of how power distribution in a building works to a non-engineer, making a water analog (something most people can easily picture in their heads) works extremely well.


“Must we learn command lines…?”

This is an answer to a question originally asked on Quora. You can find the original here:
Must we learn command lines (e.g. Make, Bash or -o)?


Can’t we use a programming or scripting language to create scripts and/or macros? I’m having trouble with Bash commands, and I’m too lazy to figure out how to solve it, so I was wondering whether learning Bash is worth it.

Answer by Rustin Bergren:

No, you mustn’t. But keep in mind that many things are really just pretty interfaces that convert mouse clicks and field contents into commands at a lower level you never see. If you want to understand something well and use it to its fullest potential, it’s often best to understand its native language. That said, some things are absolutely miserable to speak to in that fundamental language, which is why Photoshop is way better for doing basic editing on a single image than ImageMagick (note, I say single image).

As far as make goes, there are plenty of more advanced build environments that obscure dependency and conditional-compilation details from you (hide by default, really; you can do all of that and more with them when you dig down) and handle them in the background. Xcode, for example. But heaven help you if everything doesn’t always coincidentally work the way you expect it to. And if you’ve ever programmed anything, you know how often what you expected to happen doesn’t. Having an intimate knowledge of how things work underneath makes debugging such things SO much easier. Otherwise, when stuff doesn’t work as expected (and I see this all the time), people just take stabs in the dark at what the problem is, iterating through different possibilities, changing this option, that option, restarting, etc. A basic knowledge of how the underlying components work together would let one systematically check each part, test hypotheses, rule out others, and ultimately verify that everything is working as expected. The compilation tools in Xcode are great and by no means am I attacking them, but having an intimate understanding of what’s going on under the surface is invaluable. To that end, sometimes it’s easier to learn the basics of compilation by building with a much less sophisticated tool like Make.
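To make that concrete, here’s a minimal, hypothetical Makefile for a two-file C program (the file names are made up for illustration); make rebuilds only the targets whose listed dependencies have changed:

```make
# recipes must be indented with a tab, not spaces
CC = cc

app: main.o util.o
	$(CC) -o app main.o util.o

main.o: main.c util.h
	$(CC) -c main.c

util.o: util.c util.h
	$(CC) -c util.c
```

Touch util.h alone and both object files get recompiled; touch only main.c and just one does. That dependency bookkeeping is exactly what the fancier tools are doing for you behind the curtain.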

If you’re interested in being a hacker or system administrator you should definitely learn bash. Especially if you are particularly lazy (like me). That’s right, learning these tools allows you to spend less time doing repetitive boring tasks–that’s the point.
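For instance, a task like renaming a pile of files, tedious by hand in a gui, is one line in bash (the directory and file names here are just throwaways for the demo):

```shell
# create a few throwaway files, then rename every .txt to .bak in one loop
mkdir -p demo
touch demo/a.txt demo/b.txt demo/c.txt
for f in demo/*.txt; do mv "$f" "${f%.txt}.bak"; done
ls demo
```

The `${f%.txt}` expansion strips the suffix; the same loop works unchanged on three files or three thousand.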

There are also other benefits to command-line-type systems that you will come to like after some time using them.

  • It’s really easy to keep track of what you did to get something to where it is. Often this can be a single screenshot that shows not only the commands you typed, but the results of those commands. A great tool for this is the script command. According to the man page:

    The script utility makes a typescript of everything printed on your terminal…

  • It’s very easy to communicate how to do something in a command line to someone else. Have you ever read a tutorial on how to do something in a gui–scrolling through pictures of mouse clicks? You know what’s worse? Writing a tutorial on how to do something in a gui.
  • It’s easier to script stuff that is fundamentally command-line based. Scripting mouse movements and clicks is horribly tedious. Also, don’t get me started on icons. There’s really just no intuitive way to infer a script line like: “click on the little button that looks like an arrow trying to escape a box…or maybe it’s an umbrella on a square table…yeah, click that and then a new box will magically appear, click the little picture in that box that looks like…”. GUIs sometimes have a scripting interface you can talk to directly (AppleScript calls these Scriptable Applications), but these are NEVER any better documented than the gui software itself. And go figure: after tediously writing a manual on how to do stuff in the gui for their application, who wants to write documentation for the scriptable interface no one ever uses anyway? You ask about scripting; there’s nothing easier to script than a command line application.
  • Configuration patterns and common tasks, even complicated ones, can be applied very quickly. You can just cut and paste them from your “notebook”.
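The script command from the first bullet can be demonstrated in two lines (the flags here are as in the util-linux version of script; the BSD version takes its arguments differently, and session.log is just a name I picked):

```shell
# record the output of one command into a transcript file, then inspect it
# -q: quiet (no start/stop banner), -c: run this command instead of a shell
script -q -c "echo hello from script" session.log
grep "hello from script" session.log
```

Run it interactively without -c and everything you type and see goes into the transcript until you exit.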

That said, some things absolutely suck when doing them from the command line. But, that’s the answer to a different question.

Matlab: How to use extrapolate from spline (or anything similar) curve to show whether new data fits that curve? (read detail section for…

Answer by Rustin Bergren:

I think what you’re trying to do is have MATLAB ‘give’ you the curve of the 2D spectrogram plot so that you can compare column evaluations with the proposed spline curve. Then you can say, “yes, the max value in this column is within xx of the spline; therefore, it’s part of the set.” Is that right?

The problem is, there are infinitely many regressions that will match your data. If you supply the spline algorithm with data points that belong to a curve you’re not actually interested in, they will affect the overall curve, and in many cases the new curve will now match those points. So what you would need to do is remove the points you don’t want smoothed into the calculated spline; however, it seems like what you’re trying to do is find out what those points are. In which case, we’re now chasing our own tail.

By example
Let’s take a curve like this:

x = 0:10;
y = 10.*x.^2+3.*x;

Now let’s modify some of the points on the end so it doesn’t match the curve perfectly.

y_data = y;
y_data(end) = y_data(end) - 888;
y_data(end-1) = y_data(end-1) - 400;

Without adjusting anything let’s spline based on all those points

yspl = spline(x,y_data,0:.1:10);

The red dashed line is the calculated spline. While our data points are at 0, 1, 2…10, I configured the resultant spline to interpolate at 10 times that resolution, so it’s defined at 0, .1, .2, .3,…10.

But hey! That’s not what we want. That spline just included the data points we were trying to ignore. So what can we do? Well, let’s exclude some of the points from the calculation.

I’m going to remove the first three and last four points using range selections, but with essentially the same command. Notice that you can keep the same range parameter (the third argument in the spline function call); this will now extrapolate the calculated spline onto that range.

yspl = spline(x(4:end-4),y_data(4:end-4),0:.1:10);

So that calculates the spline using only the four remaining points, and it doesn’t look too bad for what we were trying to do.

The problem, however, is: how do you know from your series which points to include and which to remove? It seems like you’re trying to do all this programmatically. Can you just assume that the max column values from 30% to 60% of your x range are valid points within your animal call? If so, then gravy–try doing this.

You can also try smoothing, which is essentially the same as the spline but allows you to supply a smoothing parameter. Notice, however, that our bad values at the end, if included, still mess us up:

yspl = csaps(x,y_data,.1,0:.1:10);

Not only does smoothing fail to keep the interpolated curve from following our down-sloping points at the end, it also pulls the spline curve away from the data points that do match our curve.

I hope this answers your question. If you want to know how to extract the points from your FFT and use the functions I listed, please say so in the comments. To review quickly, the spline function is called like this:
spline(x_datapoints, y_values_at_x_datapoints, new_x_range)
The new_x_range parameter allows you to both interpolate and extrapolate the calculated regression function.

To be honest, if you’re trying to isolate the audio sections, while it’s a novel idea, I don’t think this is the best way to go about it. Thanks for the A2A!


Deterministic Human Behavior: Trying to Find Free Will

note: this was a research paper I wrote for an undergrad English course. It’s a bit funny reading it now; however, when I had it posted previously I received emails from a couple of people who discovered it accidentally on the web and found it interesting. I’ve done my best to carry over the formatting, though I’m sure there are a couple of oversights.

There is no greater question when contemplating the existence of the human soul, the basis for morality and ethics, and the notion of the individual and the autonomous self than the question of free will. While scientific pursuit illuminates the darkened crevices of the universe’s mysteries, often what it uncovers is yet another cavern whose walls expand wider and whose bottom also appears endless. Often scientific discovery reveals a mysterious world that is stranger than our past ignorance could imagine, and while specific knowledge is gained, almost as many new questions are uncovered as previous ones answered. The tool of science has given us much insight into the predictable (and unpredictable) nature of the universe and the wonderment of the human brain. Most certainly, what science has shown us is that we live in an orderly and predictable universe that conforms to certain laws. Science’s usefulness lies in discovering and exploiting those laws to predict a future state of the universe that has yet to happen. In 1812, the eminent mathematician and scientist Pierre-Simon Laplace wrote the following:

We ought then to consider the present state of the universe as the effect of its previous state and as the cause of that which is to follow. An intelligence that, at a given instant, could comprehend all the forces by which nature is animated and the respective situation of the beings that make it up, if moreover it were vast enough to submit these data to analysis, would encompass in the same formula the movements of the greatest bodies of the universe and those of the lightest atoms. (Laplace & Dale 2)

To understand what Laplace is saying, consider a billiard table with balls scattered across its surface. With a basic understanding of classical physics, one could predict how the configuration of the balls would change if someone were to strike one of the balls and set it in motion. Likewise, the same calculations could be made to determine the past configuration of the balls if one knew the exact details of the current state. Laplace extends this notion to include all matter in the universe. He paints a picture of a universe that follows a predeterminable structure, limited only by our knowledge of all the laws at work and every particle’s initial position. This notion is known as determinism, and all of our observations and discoveries in science have preserved this idea with only a few exceptions.[1] This quickly leads to the concept of free will. One must ask: are our actions and behavior similar to that of matter as Laplace describes? Could human behavior be predicted with the same accuracy as that of planets, of galaxies, of electrons and other particles? Is human behavior the product of sequenced neuronal firings following a specific predetermined set of laws, much like all other matter in the universe?
If so, and our brains are just deterministically crunching data and representing the outputs as our actions in the world, what can be said about our free will and the existence of choices? Do we make choices, or does it merely feel as though we do, our actions being strictly consistent with our genetic, environmental, and evolutionary past? Furthermore, if our intrinsic and natural feelings about choice turn out to be merely an illusion, how should that affect how we consider the actions of others? What about history’s most vile criminals and most honored saints: is the magnitude of their acts at all diminished if the laws of the universe posit that there was only one possible course of history?

[1] Those exceptions would be the discoveries made in the last 60 years in quantum mechanics. This paper will not cover quantum mechanics as it relates to free will, as such quantum effects exist only in systems at an extremely small scale. Michael Shermer, summarizing University of Colorado physicist Victor Stenger in an article for Skeptic magazine, notes that “Stenger computes that the mass of neural transmitter molecules and their speed across the distance of the synapse are about two orders of magnitude too large for quantum effects to be influential. There is no micro-macro connection.” That is to say, the physics involved in the electro-chemical operation of the human brain takes place at a scale above that at which quantum effects have been observed.

The editor of The Oxford Handbook of Free Will, Robert Kane, identifies the modern major divisions of philosophical thought concerning the questions I have presented. Kane notes that the debate is not merely divided between those who argue for and against determinism; determinists are further divided into those who believe a deterministic universe is compatible with free will and those who do not (6). This paper will examine these viewpoints peripherally: deterministic versus indeterministic human behavior (the latter otherwise known as free will), and how each fits within the body of the current scientific literature.[2]

[2] It should be noted that one could argue that the method of science is not equipped to answer these questions. Refuting such notions is beyond the scope of this paper, and so this paper will assume that the scientific method represents an appropriate method for discovering truth.

While it seems that we naturally feel we are able to influence our own destinies and make choices that will affect our future, one might wonder how we came about to question this ability. Kane cites many possible reasons for this; mainly pointing to an increased knowledge in human psychology: concepts regarding heredity vs. environment, social conditioning, and the influence of our unconscious brain (5). Kane also makes note of advances in physics and a greater understanding of the universe around us (7).

While it might seem simple to envision a brain as a highly advanced machine, given the ubiquity of the modern computer in our everyday lives, this revolutionary concept can be traced back to Julien Offray de La Mettrie, who proposed it more than 250 years ago. His view was that the distinguishing characteristics separating the human brain from other machines were quantitative, not qualitative (cited in Greenfield 35). Basically, this describes the cognitive faculties of the human brain as differing from those of other machines by a matter of scale, not by intrinsic qualities such as free will. It appears that La Mettrie’s speculations were not all that far off.

Modern ideas about will originate in the canon of Western philosophy starting with St. Augustine, whose inspiration was modeled after Greek philosophers such as Plato and, later, Aristotle (Bertelsen 127-131). Augustine diverges from the earlier Greeks in developing a human entity that transcends concepts of reason alone, suggesting, as Bertelsen offers in Free Will, Consciousness and Self, that “one can become oneself only by an effort of will, and…individuation consists precisely in holding on to…one’s individuated life project” (131). This suggests an intention that exists within oneself. Kane, when defining the nature of our choices, states, “This ‘up to us-ness’ also suggests that the origins or sources of our actions are in us and not in something else of which we have no control” (5).

The reality of free will is fundamental to Western Judeo-Christian theology, and it is no wonder that some of the devoutly religious are interested in contesting claims that free will does not exist. This actor inside us shares many analogues with the Christian concept of a soul, i.e., a decision-making entity that transcends our physical body. In an article for The National Review, Stephen Barr contends that what uniquely defines the spiritual components of human beings are a rational intellect and free will (40). This puts the very existence of a soul, as Barr defines it, at odds with deterministic human behavior. Barr defends this concept of a soul and the notion of free will (on which it depends) in an article aptly titled The Soul and Its Enemies. In the article Barr states, “If the materialists were right and freedom did not exist, moral terms would lose all meaning. One could still punish and reward to influence behavior, but not because anyone deserved anything” (41). Barr’s argument is a common one among those who argue in favor of free will; however, its logic is inherently flawed. The fallacy employed here is an ‘argument from final consequences’: asserting that the absence of free will implies a world that is undesirable (i.e., without a real absolute morality) and is therefore false. While the consequences of a certain condition may be undesirable, that says nothing about its ontological validity. We may not like to think of a world in which there is no such thing as absolute morality, but our desires do not dictate what is and what is not. Furthermore, I dispute Barr’s contention that the absence of free will precludes a concept of morality and individual responsibility, which will be discussed later on. Barr’s arguments are characteristic of the pro-free-will mantra.
Another example, in which Barr attempts to demonstrate how the capacity of the human brain is both unlike and beyond that of modern computers, illustrates this:

Chess-playing programs like the one that narrowly defeated Garry Kasparov consider many trillion of positions in deciding upon a move, whereas even the greatest human player considers at most a few thousand, and usually far fewer. And yet this billion-to-one advantage in calculating is nullified by the human grandmaster’s power of understanding. (43)

Barr is trying to demonstrate that human cognition is unlike a computer’s logic, and his argument runs: if a computer’s logic is deterministic, then human cognition must be indeterminate. This is a perfect example of the ‘fallacy of equivocation’. While Barr notes one way in which brains and computers differ, that does not mean a computer is the opposite of a brain in every respect. Furthermore, Barr’s premise that brains are unlike computers couldn’t be further from the truth. In fact, modern psychologists and neuroscientists fully accept a computational theory of mind, or “the hypothesis that intelligence is computation” (Pinker 76). According to Steven Pinker, former faculty member of the Department of Brain and Cognitive Sciences at MIT and currently Johnstone Family Professor in the Department of Psychology at Harvard (“Steven Pinker – About”), we should accept such a computational theory “Because it has solved millennia-old problems in philosophy, kicked off the computer revolution, posed the significant questions of neuroscience, and provided psychology with a magnificently fruitful research agenda” (Pinker 77). That is, envisioning the mind as a computer has been very useful to us. Admittedly, this also seems to be an ‘argument from final consequences’; however, this is not Pinker’s sole justification: he is merely assessing the practical usefulness of such a model. Pragmatism aside, current observations in neurobiology, emergence, and neural networks seem to back up the computational theory of mind.
Barr’s argument about understanding is very similar to an argument put forth by philosopher John Searle called “The Chinese Room” (Hirstein 35-37).[3]

[3] To very briefly summarize Searle’s ‘Chinese Room’: it asks whether a simple process, such as a man inside a room who receives symbols on pieces of paper and then uses a very complicated instruction manual to translate them into other symbols (effectively translating from one language to another), can really be said to understand the language (Pinker 93-94).

Pinker, in his book How the Mind Works, defends a computational theory of mind and addresses Searle’s “Chinese Room”. His response, which could just as easily be applied to Barr’s sentence about the “grandmaster’s power of understanding” quoted above, is directed at Searle:

…Searle is merely exploring facts about the English word understand…Science, after all, is about the principles that make things work, not which things are “really” examples of a familiar word. If a scientist explains the functioning of the human elbow by saying it is a second-class lever, it is no refutation to describe a guy holding a second-class lever made of steel and proclaim, “But look, the guy doesn’t have three elbows!!!” (95)

Barr is correct in observing a difference between how the brain processes information and how a modern computer processes information; however, asserting that the computer doesn’t “understand” does not accurately describe the disconnect. Modern research in computer architecture has produced what are known as Artificial Neural Networks, which are inspired by the human brain’s network of nerve cells (Dayhoff 1). Such Artificial Neural Networks are able to accomplish tasks previously too complex for past computer architecture paradigms. Neural Networks are particularly adept at fuzzy pattern recognition, like recognizing faces and translating images into keywords. Even more startling is their capability for learning and readjusting without having to be explicitly told what characteristics they are supposed to be observing; that is, learning by example rather than by specific instruction (10). While current artificial neural networks are able to accomplish many tasks akin to the human brain, they differ by a matter of scale. While today’s artificial neural networks may employ hundreds or even thousands of artificial neurons in their design, the human brain has on the order of 100 billion neurons (6). In this light, the conclusion of Julien Offray de La Mettrie more than 250 years ago, that human cognition differs from other machines only quantitatively rather than qualitatively (as cited earlier), is actually quite correct. Barr’s and Searle’s argument regarding understanding is primarily one of syntax and language; even more so, however, it fails to consider demonstrable parallels between computer science and the current body of neuroscience research.

While the accepted computational model of the brain appears to be incredibly damning to the notion of free will, it only appears as a faint glimmer when put next to the blinding spotlight of the laboratory research. What neuroscientists and psychologists have discovered in the last 30 years has jostled the firm pedestal upon which free will once stood into an infirm teeter that looks likely to collapse.

Interesting Observations

By far, the most cited and referenced evidence in scientific analyses of free will over the past 20 years is a study conducted by Benjamin Libet et al. in 1983. What the startling conclusions of the study show is that measurable brain activity, otherwise known as readiness potential, can be detected several hundred milliseconds before the subject reports a conscious experience of ‘wanting’ or intending to act (Libet 249-268). According to the published research, “The onset of cerebral activity clearly preceded by at least several hundred milliseconds the reported time of conscious intention to act” (249). In the study, subjects were seated in a chair and instructed to stare at a screen that had a circularly revolving dot with numbered positions along its circumference. While the subjects’ brain activity was monitored via electroencephalogram (EEG), they were instructed to wait for the dot to make one complete revolution on the screen and then initiate a motor response (i.e., squeeze their hand) at any time thereafter. The subjects were urged to be spontaneous in their action, and to note the ‘clock’ position of the dot on the screen at the time they consciously decided to act. The point in time at which the subject actually moved their finger was electronically detected and recorded so that it could later be correlated with the EEG readings as well as with the subject’s reported ‘clock’ position of the dot. In addition, other series were done in order to establish baseline measurements for later comparison (231-232).[4]

[4] Two other series of tests were done: in one, the subjects were instructed to move their fingers at predefined intervals of the dot’s ‘clock’ position; in the other, the subjects were instructed to respond to a small shock applied to the back of their hand by two electrodes. This was done to establish baselines for self-initiated vs. directed acts, and for cases where the subjects were able to pre-plan their actions vs. cases where they were not.

In plain words, what the results of the study indicated was that certain signature brain activity was observed in the subjects a significant amount of time before the moment they reported having chosen to act. What this could suggest is that the sensation of conscious choice may just be the mechanistic output of our brain’s hardwiring, making decisions before we are consciously aware of them and only creating the illusion of spontaneity and free will. The implications of such conclusions have not gone without criticism, however. In a critique of the research, Preben Bertelsen, a psychology professor at Aarhus University, Denmark, summarizes the study as follows:

…there are a number of problems with these experiments. In one case, subjects were asked to state the moment at which they made a decision, for example, to lift a finger. The results allegedly demonstrate that cerebral neural activities (readiness potential) take place at least 350 milliseconds prior to the subject’s awareness of his/her intention or wish to act. The aim of the experiment was to demonstrate that the neural initiation of an action appears prior to the formation of a conscious plan to act (for example, to move a finger); hence the action is initiated by something other than the person’s free will. (Bertelsen 126)

In his review, Bertelsen goes on to point out what he feels are many problems within the experiment, stating that it falsely assumes the ability to identify exactly when a subject became consciously aware of making a choice. Bertelsen is also critical of the experiment’s assumption that certain mental actions occur at a specific point in time, when in fact a specific decision may be the culmination of multiple operations spanning a longer period (126-127). Bertelsen is not alone in his criticisms, and other methodological holes in Libet’s research have been identified; however, the results of Libet’s study have opened the door to further research that attempts to close those gaps.

Since Libet’s study in 1983, much new technology for brain imaging has become available to researchers. A more recent study, headed by Chun Siong Soon of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, used functional Magnetic Resonance Imaging (fMRI) to evaluate brain activity in subjects asked to initiate voluntary actions similar to those in Libet’s study (Welberg; Soon et al.).[5]

[5] fMRI, or functional Magnetic Resonance Imaging, is a method that uses magnetic fields (much like a traditional MRI) to monitor the functioning of organs, sometimes by monitoring blood flow (“fMRI”).

In this most recent study, Soon et al. set out to answer some of the many questions raised by the conclusions of Libet’s previous work (as seen in Bertelsen’s criticisms above) through the use of fMRI as well as improved methodology. The paper summarizes those issues at its onset:

First, the readiness potential…only provides information about late stages of motor planning. Thus, it is unclear whether…high-level planning stages might be involved in unconsciously preparing the decision…Second, the time delay between the onset of the readiness potential and the decision is only a few hundred milliseconds. It has been repeatedly argued that potential inaccuracies in the behavioral measurement of the decision time at such short delays could lead one to misjudge the relative timing of brain activity and intention. Third, does any leading brain activity indeed selectively predict the specific outcome of a choice ahead of time? To rule out the idea that any leading activity merely reflects unspecific preparatory activation, it is necessary to study free decisions between more than one behavioral option. (543)

As the researchers indicate, the relatively short interval (~300 ms) that Libet’s study observed before subjects became conscious of a choice is not long enough to rule out misjudgments of the relative timing of events. Also, brain activity observed before the subjects decided to act may merely reflect the brain getting ready to make a choice, not actually making one. Finally, the researchers note that subjects must be given more than one behavioral option in order to rule out the possibility that activity leading up to the decision reflects preparation to choose rather than the choice itself.

The results of Soon et al.’s research were even more startling than Libet’s. In this experiment, subjects were asked to choose between a left and a right button while a series of letters flashed on a screen in front of them. After making the selection, a second screen appeared that allowed the subject to indicate the letter that had been on screen at the moment they became consciously aware of the left/right choice. All of this was conducted while the subjects’ brain activity was monitored with fMRI. With this method, the experimenters were able to predict a subject’s choice, through analysis of the fMRI data, up to 10 seconds before the subject reported being consciously aware of making it (Welberg). That is to say, the researchers could determine a subject’s choice from brain activity apparent 10 seconds before the subject had become conscious of the choice. This earth-shattering conclusion dispels the critique of Bertelsen and others that the brain activity preceding awareness of a conscious choice, as observed in Libet’s research, was nothing more than preparatory action leading up to the choice. If it were merely preparatory, the researchers would not have been able to predict what the subjects were going to choose until the moment the subjects became consciously aware of it. Furthermore, the relatively large margin of time (up to 10 seconds) between the observed fMRI activity and the subjects’ reported awareness of their choice does away with the criticism that Libet’s study lacked temporal precision.
The astounding conclusions of these studies do not definitively prove that we are without free will; instead, they lend plausibility to the possibility that there is more going on inside the massively interconnected tapestry of neural fabric beneath our skulls than previously assumed. The computational components of our brain seem to work transparently, beneath our conscious awareness. In the case of Soon et al.’s research, our conscious choices are shown to be determined long before we become consciously aware of them; and while we naturally feel ourselves to be the conscious directors and conductors of our actions, we are more likely actors following our brain’s deterministic commands.

Where does this leave us?

The computational model, combined with scientific observation, in some ways eerily casts us as nothing more than highly sophisticated robots. The implications of such a concept, as one can imagine, represent a problem for how we have traditionally regarded the actions of others. Bertelsen observes the following:

From a moral as well as a judicial point of view, the precondition for holding a person responsible for his/her own actions is that the person in question had both the freedom and the will – free will – to be capable of acting otherwise, had he/she wanted to…there is no point in demanding in therapeutic interview that the client take responsibility for his/her own life if, fundamentally, the person in question does not have free will to do so. (Bertelsen 119)

As quoted earlier, Barr reaches a conclusion similar to Bertelsen’s when he says, “If…freedom did not exist, moral terms would lose all meaning” (41). In yet another example, Walter Glannon, in an essay entitled Neurobiology, Neuroimaging, and Free Will, summarizes the position Aristotle took over 2000 years ago:

In the Nicomachean Ethics, Aristotle presents the default assumption that a person acts freely (voluntarily) and is responsible for his behavior barring evidence of compulsion, coercion, or ignorance of the circumstances of action…A person can be excused from responsibility for his behavior when any one of these three conditions…described by Aristotle is present. (70)

What Barr, Bertelsen, Glannon, and even Aristotle illustrate is that our concepts of responsibility, consequence, blame, accountability, and ultimately culpability hinge on our ability to make autonomous choices free of outside coercion. They suggest that without freedom of choice, an individual cannot be held responsible. What, then, can we conclude if our actions are based not upon our own choices but rather upon the hardwiring in our brains (hardwiring inherited from our parents and further shaped by our environment, both factors outside our control)? Certainly, one might conclude that empirical observation (such as Libet et al.’s and Soon et al.’s research) shows that our actions are not truly “up to us” (to use Kane’s description), but are instead a product of our computational and deterministic subconscious. In such a scenario, what actions, if any, could an individual be held responsible for? Glannon’s characterization of Aristotle’s philosophy (quoted above) suggests that responsibility requires independence from outside compulsion or coercion. That is to say, if a pilot has a gun pressed against his neck by a hijacker, we can assume that his normal will and actions are subverted by an external force and that he is therefore not ultimately responsible for his actions. Further interpretation, though, complicates this scenario and can produce the opposite conclusion. For example, Bertelsen beautifully illustrates:

A pilot who, when a gun is pointed at his neck, carries out the orders of a hijacker, may be said to be acting of his own free will, so long as his reasoning tells him that this is the proper thing to do under the circumstances; whereas a pilot who, in a state of panic, attempts unreasonably to resist the hijacker, could not be said to have acted freely. His action, we would argue, was prompted by an inner compulsion of insanity. (119)

What if, as Bertelsen demonstrates, the “outside force” that influences one’s action comes from inside one’s own brain? Do congenital mental illnesses, inherited at birth and outside the control of the individual, pardon one from responsibility? If so, and factors from inside our brains can excuse our responsibility, which of our actions can we be responsible for? What is us and what is our brain? In the light of Soon et al. and Libet et al., even the most elementary choices seem to be caused fundamentally by our subconscious; where, then, do we fit in?

When grasping the implications of determinism as it relates to free will, one can quickly get lost in esoteric concepts that are as much inquiries into language as they are inquiries into reality. However, the answer to many of these questions is simpler, and more easily derived, by adopting a more accurate view of the self (i.e., what is us and what is our brain). Asserting that the absence of free will removes responsibility is false and based on a misguided view of the self. If we use the analogy of a malfunctioning machine, the assumption becomes obviously absurd. What if, because of weak construction or deterioration with age, a crane collapses and harms a number of innocent people? Are we concerned with whether or not the crane had a choice about its actions? Does that make a difference in how we respond to the tragedy? Does it make the machine any less responsible? Of course not. What if, instead of a construction crane, it was a robotic device designed by the military for use in war? What if it fails to distinguish friend from foe and accidentally kills innocents? Is the robot any more or less responsible than the construction crane? Again, of course not. What about the sophistication of the robot’s intelligence? Does a more sophisticated, though ultimately deterministic, machine assume more responsibility for its actions? No. Even an infinitely advanced robot should not change our view of its culpability. Its circuits and programs are developed and engineered by humans. Even though it may appear to be conscious and able to make choices, it is still ultimately the product of what made it. Taken even further, we are those infinitely complex robots, and we are ultimately responsible for what our brains do. We are our brains, just as the robot is its programming and collection of circuits.
While the robot is the product of human engineering and design, we are the product of evolutionary design, our features developed and shaped through millions of years of natural selection. While such a view may mean that we do not “deserve anything,” as alluded to by Barr (41), it still does not absolve us of responsibility. Whether our actions are entirely deterministic, entirely indeterministic, or somewhere in between, we (whatever ‘we’ means) ultimately retain responsibility. Our bodies are responsible for the actions executed by the brains they carry around, just as the crane is responsible for any action rendered by its mechanics. If the reality of our behavior is deterministic, it does not follow that we cannot be held responsible for our actions. Consider the crane or any other piece of machinery: when it fails or proves hazardous, we don’t punish it for failing; instead, we take the necessary action to prevent it from doing additional harm. We fix it or decommission it. Such an attitude might prove useful in evaluating our current judicial system or in our everyday interactions with other human beings. Realizing that we are nothing more than highly sophisticated robots humbles our self-image and allows us to be more sympathetic to the actions of others.


Barr, Stephen M. “The Soul and Its Enemies.” National Review. National Review Inc., 2007. 40-43.

Bertelsen, Preben. Free will, consciousness, and the self : Anthropological Perspectives on Psychology. New York: Berghahn Books, 2005.

Dayhoff, Judith E. Neural Network Architectures : An Introduction. New York, N.Y.: Van Nostrand Reinhold, 1990.

“fmri – Definitions from” 31 Jul 2008 <>.

Glannon, Walter. “Neurobiology, Neuroimaging, and Free Will.” Midwest Studies in Philosophy 29.1 (2005): 68-82.

Greenfield, Susan. Journey to the Centers of the Mind: Toward a Science of Consciousness. New York: W.H. Freeman, 1995.

Hirstein, William. On Searle. Belmont, CA, USA: Wadsworth/Thomson Learning, 2001.

Laplace, Pierre Simon, and Andrew I. Dale. Philosophical Essay On Probabilities. New York: Springer-Verlag, 1995.

Libet, Benjamin. Neurophysiology of Consciousness. Boston: Birkhäuser, 1993.

Pinker, Steven. How the Mind Works. New York: Norton, 1997.

Soon, Chun Siong et al. “Unconscious Determinants of Free Decisions in the Human Brain.” Nature Neuroscience 11.5 (2008): 543-545.

“Steven Pinker – About.” 5 Aug 2008 <>.

Welberg, Leonie. “Neuroimaging: Free will?.” Nature Reviews Neuroscience 9.6 (2008): 410-411.

Footnotes

1. Those exceptions would be the discoveries made in the last 60 years in quantum mechanics. This paper will not cover quantum mechanics as it relates to free will, as such quantum effects only appear in systems at an extremely small scale. Michael Shermer, in an article for Skeptic magazine, summarizes University of Colorado physicist Victor Stenger, citing that “Stenger computes that the mass of neural transmitter molecules and their speed across the distance of the synapse are about two orders of magnitude too large for quantum effects to be influential. There is no micro-macro connection.” That is to say, the physics involved in the electro-chemical operation of the human brain takes place at a scale well above that at which quantum effects have been observed.
2. It should be noted that one could argue that the method of science is not equipped to answer these questions. Refuting such notions is beyond the scope of this paper and so, this paper will assume that the scientific method represents an appropriate method for discovering truth.
3. To summarize Searle’s ‘Chinese Room’ very briefly: it asks whether a simple process, such as a man inside a room who receives symbols on pieces of paper and then uses a very complicated instruction manual to translate them into other symbols (effectively translating from one language to another), can really be said to understand the language (Pinker 93-94).
4. Two other series of tests were performed: in one, the subjects were instructed to move their fingers at predefined ‘clock’ positions of the dot; in the other, the subjects were instructed to respond to a small shock delivered to the back of the hand by two electrodes. This was done to establish baselines for self-initiated versus directed acts, and for cases where the subjects were able to pre-plan their actions versus cases where they were not.
5. fMRI, or Functional Magnetic Resonance Imaging, is a method that uses magnetic fields (much like a traditional MRI) to monitor the functioning of organs, sometimes through monitoring blood flow (“fmri”).

Simple L2/L3 Load Balancing with a Cisco L3 Switch

Thanks to Tony Kapela for a cool load-balancing technique that can be implemented on a Cisco L3 switch. The specific switch in this example is a Cisco 3550 running IOS version c3550-ipservicesk9-mz.122-35.SE, but this should work on most current routing platforms (including, apparently, other L3 switches).

The 3550 edge switch, to which the webserver is directly attached, has two discrete L2/L3 Fast Ethernet connections to the webserver. The webserver’s incoming traffic is automatically balanced across the two links/paths. This is accomplished with two static routes that point to a single ‘loopback’ IP on the webserver but have different next-hop addresses, corresponding to the IP addresses of the two NICs on the far side of the transit network. The web (or whatever) service is then bound to the ‘loopback’ address on the server, and incoming traffic is routed up to that address appropriately. He then uses RTR (Response Time Reporter) on the edge 3550 to monitor service availability over each discrete path. If a service becomes unresponsive beyond a certain threshold over one of the paths, that path’s route is taken down. Check out the config below. Assume we have a webserver with its web service bound to and its two NICs addressed and, respectively. The webserver’s default gateway would be

The actual interfaces on the switch

interface FastEthernet0/30
 description [cust][fa0/30][vlan600][LoadBalance Server - Link A]
 switchport access vlan 600
 switchport mode access
 load-interval 30
 mls qos trust dscp
 spanning-tree portfast
!
interface FastEthernet0/31
 description [cust][fa0/31][vlan600][LoadBalance Server - Link B]
 switchport access vlan 600
 switchport mode access
 load-interval 30
 mls qos trust dscp
 spanning-tree portfast
!
interface Vlan600
 description [vlan600][LoadBalance Webserver]
 ip address
 no ip redirects
 no ip unreachables
 no ip proxy-arp

Identify what to look at with RTR
rtr logging traps

rtr 10
 type tcpConnect dest-ipaddr dest-port 80 control disable
 timeout 10000
 owner loadbal-srv-a
rtr schedule 10 life forever start-time now

rtr 11
 type tcpConnect dest-ipaddr dest-port 3306 control disable
 timeout 10000
 owner loadbal-srv-a
rtr schedule 11 life forever start-time now

rtr 20
 type tcpConnect dest-ipaddr dest-port 80 control disable
 timeout 10000
 owner loadbal-srv-b
rtr schedule 20 life forever start-time now

rtr 21
 type tcpConnect dest-ipaddr dest-port 3306 control disable
 timeout 10000
 owner loadbal-srv-b
rtr schedule 21 life forever start-time now
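Each `rtr … type tcpConnect` probe above is essentially a timed TCP handshake against the service over one path. As a rough illustration (not a replacement for RTR), the check can be sketched in Python; the `loopback_ip` name in the usage comment is a placeholder, since the probe addresses above are elided:

```python
import socket

def tcp_connect_ok(host, port, timeout_s=10.0):
    """Return True if a TCP connection to host:port completes within timeout_s.

    This approximates what an IOS 'rtr ... type tcpConnect' probe measures:
    can the service finish a TCP handshake before the configured timeout
    (10000 ms in the config above)?
    """
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:  # connection refused, unreachable, or timed out
        return False

# Hypothetical usage, mirroring probes 10 and 11 (HTTP and MySQL via path A):
# path_a_ok = tcp_connect_ok(loopback_ip, 80) and tcp_connect_ok(loopback_ip, 3306)
```

The `control disable` keyword just turns off the IP SLA control protocol handshake; the health signal is simply whether the connect succeeds in time, which is all this sketch models.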

Specify the track objects

track 10 rtr 10
track 11 rtr 11
track 20 rtr 20
track 21 rtr 21

Then, if either service on a path is down, take down that path’s aggregate track object
track 1 list threshold percentage
 object 10
 object 11
 threshold percentage down 99 up 100
track 2 list threshold percentage
 object 20
 object 21
 threshold percentage down 99 up 100
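The threshold arithmetic here is worth spelling out: each member object contributes an equal share of the percentage, so with two members a single failed probe yields 50%, which is at or below the `down 99` threshold and takes the whole list down; the list returns to up only at 100%, i.e. when both services respond. A small model of that semantics (my reading of the command behavior, not Cisco code):

```python
def track_list_state(member_up, prev_state, down_at=99, up_at=100):
    """Model of IOS 'track N list threshold percentage ... down 99 up 100'.

    member_up: one boolean per tracked object (e.g. the HTTP and MySQL
    probes for one path). The list goes down when the up-percentage is at
    or below down_at, and up when it is at or above up_at; in between, the
    previous state is held (hysteresis).
    """
    pct = 100.0 * sum(member_up) / len(member_up)
    if pct <= down_at:
        return "down"
    if pct >= up_at:
        return "up"
    return prev_state

# With down 99 / up 100, one failed probe out of two (50%) downs the track:
# track_list_state([True, False], "up")  -> "down"
```

Setting `down 99 up 100` with two members effectively means "both services must be healthy or the path is withdrawn," which is exactly the behavior you want when HTTP and MySQL must both be reachable over a link.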

And finally, here are the static routes to the loopback on the server
ip route name Loadbal_svr_a track 1
ip route name Loadbal_svr_b track 2
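With both tracks up, these are two equal static routes to the same host address, and the switch’s CEF load sharing (per-destination by default) hashes each packet’s source and destination addresses to pick one next hop. That is what makes the balancing safe for TCP: a given flow sticks to one link while the population of flows spreads over both. A toy model of that per-flow stickiness (the hash function and next-hop names are illustrative; CEF’s actual algorithm differs):

```python
import zlib

def pick_next_hop(src_ip, dst_ip, next_hops):
    """Toy per-destination (per-flow) load sharing over equal-cost routes.

    The property that matters for this setup: the same src/dst address pair
    always hashes to the same next hop, so a TCP session never flaps between
    the webserver's two NICs, while different flows spread across both links.
    """
    h = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
    return next_hops[h % len(next_hops)]

# Hypothetical stand-ins for the two NIC next hops (elided in the routes above):
paths = ["nic-a", "nic-b"]
```

When a track object goes down, its static route is withdrawn, the remaining route carries all flows, and traffic rebalances automatically once the failed path’s probes recover.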