Getting in touch with your inner log()

February 18, 2010

Neuroscience is on the rise. Every other week you hear about a new discovery, or simply the next step in computing, gaming and… board-games (???).

I have always been fascinated by the human brain (well, consider the source :-P), so when I saw a class opening up on the subject at my university, I immediately signed up for it.
The basic goal of neuroscience is to reverse-engineer a complete blueprint of the brain. I am not simply referring to its actual physical makeup:

The interesting part is how it actually works. Its API if you will, or “instructions”. One of the main topics covered in this class was sight. Most of us are familiar with the “lies-to-children” version of how vision works:


Well, the human brain does perform some sort of transformation, but it is a much more useful one.
To simplify matters, let’s treat the surface of the human eye (more specifically, the cornea) as a round disc, like a vinyl record. I assume all of you are familiar with Cartesian coordinates, more commonly known as X-Y coordinates. Cartesian coordinates are very convenient when working with square surfaces (like a map or a graph), but less so when the surface is circular. A more appropriate choice is a polar coordinate system. Instead of the two values X and Y, we have R and θ (theta): R is the length of the straight line between the center of the circle and the point, and theta is the angle between that line and the horizontal. For example:
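To make the two coordinate systems concrete, here is a minimal sketch in Java (the class and method names are my own invention, not anything standard):

```java
// A minimal sketch of Cartesian <-> polar conversion, using the standard
// library's Math functions. Names are my own, for illustration only.
public class PolarCoords {

    // (x, y) -> (r, theta): r is the distance from the center,
    // theta the angle (in radians) from the horizontal axis
    static double[] toPolar(double x, double y) {
        return new double[] { Math.hypot(x, y), Math.atan2(y, x) };
    }

    // (r, theta) -> (x, y)
    static double[] toCartesian(double r, double theta) {
        return new double[] { r * Math.cos(theta), r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        double[] p = toPolar(3, 4); // r = 5, theta ≈ 0.927 rad
        System.out.printf("R = %.3f, theta = %.3f%n", p[0], p[1]);
    }
}
```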

Everybody knows that the right side of the brain controls most of the left side of the body and vice versa. In the field of vision, it’s a bit more complicated: the left side of the brain does not handle visual information from the right eye, but from the right field of vision. So it handles data from the right side of the right eye and from the right side of the left eye. But that is less relevant to what I want to talk about, and we could just as well assume we are all Cyclopes, i.e. that we only have one eye (the second eye is important for detecting depth, but that’s a whole other subject by itself). That means the left side of the brain handles information from the right side of that eye and vice versa.

The part which is first in line to handle information from the eyes and is in charge of sending the most “raw” data about what we see to the rest of the brain is called the visual cortex. Again, we simplify its surface area to make things clearer. With that in mind, one could consider the visual cortex to be rectangular.

So suppose a beam of light hits our eye. What happens now? The same image will be replicated on the visual cortex, but only after undergoing a transformation along the way. That transformation takes the parameters R and theta and displays them on the visual cortex in Cartesian coordinates – the Y value will be equal to theta (in radians), and the X value will be equal to log(R). That’s right. The function log(). The one in your calculator.
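If you prefer seeing this as code, here is a little sketch of the eye-to-cortex mapping just described (the names are mine, and this is of course an idealization, not a biological model):

```java
// The retina-to-cortex transformation described above: a point the eye sees
// at polar coordinates (R, theta) appears on the idealized, rectangular
// visual cortex at Cartesian coordinates (log(R), theta).
public class LogPolar {

    // returns { X, Y } on the "cortex": X = log(R), Y = theta (radians)
    static double[] toCortex(double r, double theta) {
        return new double[] { Math.log(r), theta };
    }

    public static void main(String[] args) {
        // doubling the distance from the center of the eye moves the point
        // by exactly log(2) along the cortex's X axis, regardless of R:
        double near = toCortex(1.0, 0.5)[0];
        double far  = toCortex(2.0, 0.5)[0];
        System.out.println(far - near); // ≈ 0.693 = log(2)
    }
}
```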

After you get over the initial shock of realizing that your brain is carrying out log calculations as you read these words (yes, even now. And now. :-P), you start thinking: “why log?”, “what makes it so special?” Only one possible answer comes to mind – evolution.

First you must realize that we do not see in 3 dimensions. Sure, we’re all experts at not bumping into things, but we have no receptors for depth. Our eye is flat, and everything we see can be considered a 2D image, just like the ones on television. So how come we still have fighter pilots? Our brain uses very elegant algorithms to extract parameters such as depth from the image it receives. One of those algorithms relies on this log transformation – it helps us detect movement. Consider a 2D object on a flat, circular surface (the surface of our eye). Such an object can transform in three ways:


Scaling:

Rotating:

and Translating:

Think about what each of these transformations means in the real world. Suppose something is growing bigger and bigger in your field of vision… you should probably duck for cover. Rotation is another good indicator that something is being hurled at you, or is simply moving in a manner you should pay attention to, unless you want to end your day as fast food. In other words, you want to decide to duck/run/attack as fast as you can.

That’s where log comes in. Suppose you were the programmer of the human brain, and you needed to build an algorithm which detects objects growing in size. Remember – we don’t see things getting closer; we just see them getting bigger. Look at the animated scaling gif. Your algorithm needs to detect that the same object is growing in all directions. That will probably not be a very time-efficient algorithm. The same goes for detecting rotation (we will get to translation later). This is where the beauty of log comes in. The mathematical properties of this function dictate that if someone throws a wrench in your direction, the image shown on your visual cortex will resemble this:

(left side – what your eye “sees”. right side – what your visual cortex “sees”.)

In general, scaling is transformed into movement of a same-sized object along the X axis, while rotation is transformed into movement of a same-sized object along the Y axis. The algorithm for detecting such change is much more time-efficient. The reason our brain uses the log function stems simply from evolution – it’s a good way to detect danger, so we use it (and when I say “we”, I obviously don’t mean just human beings. This didn’t happen overnight :-)).
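The arithmetic behind this is just the log identity log(k·R) = log(k) + log(R): scale every point of an object by k and its whole cortex image slides by the constant log(k) along X; rotate it by some angle and the image slides by that angle along Y. A tiny sketch (names mine) to illustrate:

```java
// Why scaling looks like a simple shift on the cortex's X = log(R) axis:
// log(k * r) - log(r) = log(k), independent of r. So every point of a
// growing object moves by the same amount, which is cheap to detect.
public class LogPolarShift {

    // shift along the cortex X axis for a point at radius r scaled by k
    static double scaleShift(double r, double k) {
        return Math.log(k * r) - Math.log(r);
    }

    public static void main(String[] args) {
        // two very different points of the same wrench, both scaled by 3:
        System.out.println(scaleShift(0.5, 3.0)); // ≈ 1.0986 = log(3)
        System.out.println(scaleShift(7.0, 3.0)); // same shift, ≈ log(3)
    }
}
```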

I haven’t discussed the last possible movement – translation. But we get over that obstacle using our eyes themselves. Place your finger between you and your monitor. Focus on your finger. Now start moving it in front of you. Almost instinctively, your eyes will follow your finger, keeping it in the middle of your field of vision and effectively nullifying its translation. So that’s how we deal with translation.

Bear in mind: “If the brain were so simple we could understand it, we would be so simple we couldn’t.” – Lyall Watson


Why do we dream?

February 20, 2009

Some people claim that our dreams are manifestations of our subconscious, i.e. the brain’s way of telling us what we “really want”. However, I see no logical reasoning behind this claim. In reality, I think it mimics the same way of thinking that leads to religion, astrology, new-age “medicine”, and the like – human beings like to personify things. This tendency is discussed in Richard Dawkins’s book, The God Delusion.

When giving our dreams meaning, we personify our brain, giving it a separate entity from our own, believing that it somehow “guides” us in our life, bestowing on us its “mystic” wisdom. It is a comforting thought, since it provides us with hope that there is a bigger plan, or that we have a guiding force in our life, a guardian angel if you will, that watches over us and guides us. I see it as a romantic idea, wishful thinking. A human need.

However, scientific doctrine aims to remove humans from the equation, reaching conclusions which do not rely on the human observer. Therefore, if there is no logical explanation for this subconscious, which (or maybe I should say “who”?) tells us what we should do with our life, but only through dreams that we need to interpret using unscientific methods, i.e. intuition, I must look for an explanation which relies on known scientific axioms.

An axiom which immediately comes to mind is evolution.


I will not go into the whole evolution vs. intelligent design debate, since you can find plenty of websites which discuss it, and since frankly I could just as well debate evolution vs. the Flying Spaghetti Monster.

If you “believe” in evolution, and more importantly, if you understand it, you realize that each living organism on this planet looks the way it does today because, quite simply, all the other ways were not good enough. In other words, its current state is dictated by whichever changes allowed it to survive better. This survival is not an inner mechanism or something that drives it. There is no “force” which makes evolution exist. Things which are best suited to survive do just that, because they are best fitted to do so. Kind of recursive 🙂

So let us go back to the title of this blog entry – “Why do we dream?”


I offer an evolutionary point of view, combined with computer science thinking.

The human brain is composed of connections. The more we think in a certain way, the more certain connections become stronger, reinforced. That is why astronauts undergo extensive underwater training before going on missions. It takes time for the human brain to adjust to new points of reference in space. Astronauts in microgravity usually lose their sense of direction and feel uncoordinated or clumsy. Because inner ear and muscular sensors seek terrestrial clues, astronauts must learn to rely on visual cues for balance and orientation. But even visual cues can be confusing – astronauts in microgravity need to adjust to the fact that up and down don’t really matter in space like they do on Earth. They need to “force” their brain to think differently, and that takes time.

The same goes for human emotions. If, for example, you are a person who is always depressed, you will not be able to change overnight. Changing your way of thinking and behaving takes time, since you are “re-wiring” your brain (however, if you want a “quick fix”, you can always get a brain pacemaker implant – http://en.wikipedia.org/wiki/Brain_pacemaker). The more you think differently, the stronger these new connections become, and the weaker the others become. It is very elegant code, if you think about it from a programming point of view: it makes itself more efficient and streamlined, according to the relevant needs.

Human emotions might not relate to evolution so clearly, but consider other brain functions such as moving about, breathing, or recognizing a lion in the bushes – it is vital for our survival that we carry out these actions successfully and as quickly as possible. Our brain has evolved in a manner which makes sure that whatever we do the most – i.e. whatever we need to do to survive – we do as efficiently as possible. This assumes, of course, that whatever we do the most is beneficial for our survival – perhaps not so true in the 21st century (for instance, I don’t think that reading this article improves your chances of survival, although I’ll be flattered if you think so), but most definitely true for most of our evolution, which took place in the wild, under much harsher conditions.
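The “use it or lose it” idea above can be caricatured in a few lines of code. This is purely a toy illustration, not a neuroscience model – the update rule and the constants are invented:

```java
// Toy "re-wiring" sketch: a connection that keeps being exercised is nudged
// toward full strength, while a neglected one decays toward zero.
// The rule and constants are made up purely for illustration.
public class ToyRewiring {

    static double reinforce(double w) { return w + 0.1 * (1.0 - w); } // used often
    static double decay(double w)     { return 0.9 * w; }             // rarely used

    public static void main(String[] args) {
        double used = 0.5, unused = 0.5;
        for (int i = 0; i < 50; i++) { // 50 repetitions of the same habit
            used = reinforce(used);
            unused = decay(unused);
        }
        // the exercised connection ends up near 1, the neglected one near 0
        System.out.printf("used=%.3f unused=%.3f%n", used, unused);
    }
}
```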


[peek-a-boo]

So the connections in our brain constantly grow stronger in various ways, according to what we do and how we think. It has been known for quite some time that the augmentation of these connections happens mostly while we sleep, as corroborated by a recent study (http://www.eurekalert.org/pub_releases/2008-01/uow-sbc011808.php). This is known as plasticity (http://en.wikipedia.org/wiki/Neuroplasticity). It also makes sense – as every person with a computer knows, it isn’t very smart to install or uninstall programs while they are running.

So why do we dream? Well, my hypothesis is that our dreams are a “system-check” carried out by our brain. When you go to sleep, your brain shifts synapses around, making some connections stronger, some weaker, and perhaps even creating new connections or completely severing old ones. After making each change, it is a good idea to run a system-check, don’t you think? That is also why our dreams seem so real: as far as we are concerned, the messages in our brain when we dream are identical to the ones we get when we are awake – or else it wouldn’t be a very good system-check, now would it? It is also why dreams relate to memories – that is the information the brain has available to use for its system-check. Furthermore, dreams usually relate to recent events, since those are usually the areas which get modified.

But the most important part of my hypothesis is why I think my argument is true. After all, one can find lots of explanations for why we dream; why is mine more logical than the rest? Well, I suggest that this system-check is a direct result of evolution.

Consider that our brain evolved, and we started making more and more connections. Obviously, these connections were augmented in our sleep, since if you started making changes in your brain while you are awake and running away from a cheetah – well, let’s just say your survival chances were not very good.

But some individuals also started having dreams. Those dreams were the system-check carried out by their brain, making sure that all of these new connections were OK.

So those who had dreams had an evolutionary edge over their fellow tribe members: they had less chance of suffering the consequences of “faulty wiring”. And we all know how one bug in a program can wreak havoc…

[the Blue Screen of Death]


On the Abundance of Information

June 29, 2008

If I lived 50 years ago, I think I would have been very frustrated (although I probably wouldn’t be, since I wouldn’t know any better. So allow me to revise my previous statement: if you were to send me to live 50 years in the past, I would have been very frustrated. Wait, should I say “would have been” or “would be”? Grammar is always so difficult when you time travel 🙂 )

The reason I would be so frustrated is the lack of available information. We live in an age in which, unless you are looking for something very confidential, you can find the information you need, and fast. How does the old saying go? If you searched for something on Google and came up with no results, it means you have a very specific fetish 😉

However, this plethora of data has its own inherent problems, the greatest being determining which information is relevant and correct. A lot of people claim that this new abundance of information is actually a “wolf in sheep’s clothing”, saying that you can’t really trust anything you read, and that nothing is reliable.

To which I reply: “baloney!”

Let’s review the evolution of information:

1) Information is not free, and it is controlled by a small group of people (the writers and editors of encyclopedias, newspapers, etc.). The only way to attain this information is to be born into royalty or some other form of high class – except for newspapers, which are usually a means for that small group to convey their idea of the truth to the masses (pretty much the way things were up until the mid-20th century).

2) Information is free and monitored by a bigger group of people (i.e. there is room for different opinions), but attaining it is cumbersome and it is not easily available to everyone (think the 1950s through the 1980s).

3) Information is free, monitored by everyone and for everyone, which results in a great deal of unreliable information.

I still prefer the third option. Allow me to explain.

The way I see it, the third option is the best because it puts the power in everyone’s hands. Sure, back in the 1950s the information you received was considered more reliable, but your choices were limited. I would rather you let me decide what seems reliable and what doesn’t, instead of giving me only one source of information which you have decided is true.

Yes, there are a lot of stupid people out there saying plenty of stupid things and passing them off as intelligent data. But that doesn’t mean there isn’t also useful information out there. For instance, right now I’m a Computer Science student at Tel Aviv University. Wikipedia and Google are my best friends; if I could add them to my Facebook profile, I would :-). Whenever I come across a new mathematical or computer-related term, I simply look it up, see the definition, or read an answer from a forum addressing what I need to find out (e.g. I need to convert a Double expression to a String. This is a relatively simple thing to do in Java, but when you are starting out, you don’t yet have all of the syntax down. A simple Google search for “java double to string” will yield the proper method) and carry on. Why, even if I’m in the middle of a lecture and the teacher mentions something I don’t understand, I simply use my iPhone to look up the relevant information, get my bearings, and carry on with the lesson. Try to imagine the same situation 10 years ago, when the internet was just starting to grow. Now try to imagine it 50 years ago :-).
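For completeness, here is roughly what that “java double to string” search turns up – a few equivalent standard-library ways of doing it:

```java
// Converting a double to a String in Java – three equivalent standard ways.
public class DoubleToString {
    public static void main(String[] args) {
        double d = 3.14;
        String a = Double.toString(d); // "3.14"
        String b = String.valueOf(d);  // delegates to Double.toString
        String c = "" + d;             // concatenation shortcut, same result
        System.out.println(a.equals(b) && b.equals(c)); // true
    }
}
```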

Since the information I’m looking for is mostly mathematical, the degree of “stupidity” I encounter is minimal. Problems start appearing when you are looking for information other than mathematical definitions. Let us say I was looking for a biography of Albert Einstein. The first place I’d probably go is Wikipedia, and it’s not a bad place to start – it’ll definitely be Google’s first result. People badmouth Wikipedia all the time, but personally I think it is a wonderful source of information. I wouldn’t use it for scientific research (since it is not 100% accurate), but it is a highly reliable and relatively accurate source of information. The problem is that once you start looking for information on subjects such as biographies, history, art, society, etc., the reliability of the information decreases. And not because people have their facts wrong (which is also a problem, but usually not on the websites I’m talking about) – the problem is that people confuse facts with opinions.

When I look up the definition of a “kernel” in linear algebra, there isn’t much room for opinions. It’s a pure mathematical definition, and the only difference (barring any mistakes in the definition itself) between the various sources of information could be in the manner in which the term is explained – which is very useful, since people understand things in different ways (now compare that with the 1950s, when you had only one textbook, and if you didn’t happen to think like the person who wrote it, your studies just became a whole lot more difficult). However, if I look up a controversial issue, for example “Jerusalem”, the information will not be accurate. I don’t mean that I’ll necessarily see false information, but rather that specific information will be omitted or mentioned according to the writer’s point of view. And people consider that a major issue. But here comes my next point – how is that different from the professor writing the article for Britannica? Doesn’t he also have his own opinions, prejudices and view of the world? His only advantage is that he is an expert on the subject. But does that mean he’ll be objective?

This raises the great question of “what is the truth?”, but I think I would be overextending myself trying to tackle that subject here. Maybe another time.

To sum matters up – I believe the more information, the better. Smart search engines like Google and its ilk, as well as good ol’ common sense, will help us separate the relevant information from the random ramblings of nitwits.


Procrastinators – unite tomorrow!

June 19, 2008

Yes, I haven’t written in a while. I have something in the works, but once again, the post-lecturers’-strike-university rears its ugly head, consuming every minute of my free time.

So in the meantime, enjoy this fascinating article about how the universe might not only be described by math, but made up of it as well!


The misconception of science, or “what happens when a duck quacks in an opera house?”

May 25, 2008

I am troubled by the manner in which the majority of people assume that the things they hear or read are correct, without subjecting them to any sort of scientific method.

What do I mean exactly? Well, one prime example is the story of the non-echoing quacking duck.

For those of you who are unfamiliar with this “hypothesis”: according to all of the wonderfully time-consuming and misinforming PowerPoint presentations we receive in our inboxes from people we consider to be our friends, a duck’s quack does not echo. Now, this statement has all the makings of a pseudo-scientific fact. It involves, hold on to your hats – “sound waves”. As we all know, sound waves are a very “scientific” subject, yet a relatively simple concept to grasp, since we’ve all seen how a pond reacts to a stone thrown into it. In addition, this “theory” refers to a specific species in the animal kingdom. Since there are so many animals out there, surely one of them interacts with sound waves in this unusual manner. For some reason, when people receive facts that seem to be scientific, something in their brain tells them to automatically treat this new information as correct.

However, when we examine this statement a little more closely, using real scientific methods, we see the absurdity of this so-called “scientific fact”. Yes, physically it is possible to create a sound wave which does not echo. You don’t even have to study physics to understand this; you just need a basic understanding of the concept and some good healthy logic. However, it is quite a leap of faith to believe that all the ducks in the world (every last one of the little quacking bastards), in all echo-creating conditions, have the ability to produce the exact sound (e.g. the exact wavelength and frequency) that will not echo.

However, I do not wish to dwell specifically on the matter of the quacking duck, but rather on how people tend to agree with what they read or hear without taking a moment to consider what they are being told and examining this new information with their own brain for a change.

I am not saying that we should question everything; that seems like quite a tedious way to go through life. However, I believe every person should strive to develop the ability to pick up on pseudo-scientific gibberish such as non-echoing ducks.

After all, most of mankind’s greatest scientific achievements came as a result of someone, somewhere, saying “this can’t be right…”: Charles Darwin’s theory of evolution (“Do we have to accept the story of how we came to be as told by a group of non-scientific religious zealots?”), Albert Einstein’s special and general theories of relativity (“Who decided that time, mass and space have to be constant in all systems?”), and of course, Nicolaus Copernicus’ De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), which I perceive to be the catalyst of modern science as we know it, since Copernicus’ question was not only humble in nature (“Why must we believe that man is the apex of creation?”) but also led to a less romantic and more objective weltanschauung, which is key when one is attempting to analyze the world in a scientific manner.

But perhaps the problem is not in the manner in which people think. Perhaps the problem is that people do not think at all. Sadly, I tend to believe the problem lies in the latter.