Tuesday, August 30, 2011

Featured Quote


“An error does not become truth by reason of multiplied propagation, nor does truth become error because nobody sees it. Truth stands, even if there be no public support. It is self-sustained.”

Saturday, August 27, 2011

UK DOESN'T VALUE COMPUTER SCIENCE?

Google Chairman Eric Schmidt is in for some backlash. While giving a speech in Edinburgh, Scotland, he talked about the unsatisfactory job the United Kingdom is doing in teaching its youth about computer science. He talked about how innovative the island nation has been in the past, like inventing television and computers. Schmidt seems to think that Silicon Valley should have more competition from across the pond.

Schmidt was probably hoping to sound a wake-up call for Britons rather than stir up a swarm of controversy. Being at the heart of Google, and seeing the power of computer science firsthand, he is understandably disappointed by Britain's progress in information technology. A person in his position gets to see just how wonderful the internet can be. He understands the benefits of fluency in computer programming. School administrators, however, might not have the same perspective. They deal with laws, regulations, disciplinary matters, and budgets. Admittedly “impolite,” Schmidt is concerned that UK schools focus too much on the humanities and not enough on technical subjects. Ewan McIntosh asserts that the education systems of Scotland and England are two very different matters. Perhaps Schmidt is only dissatisfied with England and is applying his observations to the whole island.

Friday, August 26, 2011

LISTENING TO MUSIC AT YALE


This course is called “Listening to Music” and is taught by Professor Craig Wright at Yale. While the course focuses on classical music, there are occasional references to modern styles such as hip-hop. The goal is to give the student the “aural skills” necessary to understand and appreciate the composition of Western music. If you are interested in writing music, playing an instrument, or having an intellectual discussion over a glass of wine, this is a good place to start.

This course includes the following topics: musical genres, rhythm, melody, notes, scales, harmony, chords, sonata-allegro form, fugue, Benedictine chant, Baroque music, Bach, Mozart, Beethoven, Mahler, Ravel, opera, and musical style.

Tuesday, August 23, 2011

Featured Quote


“What we find in books is like the fire in our hearths. We fetch it from our neighbors, we kindle it at home, we communicate it to others, and it becomes the property of all.”

Friday, August 19, 2011

DOES THE BRAIN USE PROCEDURAL GENERATION?

Have you ever been surprised by a scent that suddenly brings back a long-lost memory? It can be astounding how much detail a single stimulus can bring back. The fragrance of a home-cooked meal can trigger a barrage of childhood memories. Looking at an apple, you can imagine what it tastes and smells like, perhaps remembering cold cider quenching your thirst on a summer afternoon. Someone recovering from mental trauma might hear a loud noise and be instantly returned to whatever calamity they suffered in the past. How does this process of remembering past events and scenes work? Nobody knows for sure. The brain is a complicated organ, and we may never fully understand its inner workings. But the process is reminiscent of procedural generation.

In case you have never heard of it before, procedural generation is a way of creating lots of detail in a virtual object without taking up a lot of computer memory. Fractals are good examples of this. Take a look at the “Koch snowflake” at right for a rough understanding of this concept. First, you start with a triangle. You might call this starting point the “seed” or “input.” The “procedure,” or “algorithm,” involves adding smaller triangles to the sides of the original one. Once you have repeated the procedure an infinite number of times, you have “generated” the Koch snowflake. In its finished form, the snowflake has infinite detail and a perimeter of infinite length. Now imagine trying to save a perfect Koch snowflake in JPEG format. Since the perimeter is infinitely long, it would take an infinite number of pixels to represent the outline alone. But computers have only finite memory. If instead you write some computer code to generate an image of the snowflake in real time, as you are viewing it, that code easily fits on a hard drive. You can zoom in as much as you want, and the code will keep generating just enough detail to fit on the screen.
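For the programmers out there, here is a rough sketch of the idea in Python. It is just an illustration I put together (the function and variable names are made up, not from any library): the only things “stored” are a starting segment and the subdivision rule, and detail is generated on demand to whatever depth you ask for.

def koch_segment(p1, p2, depth):
    """Recursively expand one segment of the Koch curve to the given depth."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)            # one third of the way along the segment
    b = (x1 + 2 * dx, y1 + 2 * dy)    # two thirds of the way along
    # Apex of the triangular bump: the middle third rotated by 60 degrees.
    peak = (a[0] + dx * 0.5 - dy * (3 ** 0.5) / 2,
            a[1] + dy * 0.5 + dx * (3 ** 0.5) / 2)
    points = []
    for q1, q2 in ((p1, a), (a, peak), (peak, b), (b, p2)):
        points.extend(koch_segment(q1, q2, depth - 1)[:-1])
    return points + [p2]

# "Zooming in" is just asking for more depth; the stored data never grows.
outline = koch_segment((0.0, 0.0), (1.0, 0.0), depth=4)
print(len(outline), "points generated from one stored segment and one rule")

With depth=4 this produces 257 points from nothing but a line segment and a rule; asking for more depth generates more detail without ever storing the curve itself.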

Perhaps human memory works in a similar manner. Try as we might, the majority of humans cannot recall scenes with perfect detail. If you create a mental image of your kitchen, and focus on it, the level of detail is pitiful when compared to a high-resolution photograph of the same room. You can focus on smaller areas of the mental image, but the resolution of the overall picture is quite limited. This is quite similar to the fractal situation mentioned above. The brain only remembers detail in parts of a picture on which it is focusing. Humans (at least, most of us) cannot store the same amount of information with the same reliability that a computer can. It seems unlikely that we would store images pixel by pixel like digital photos. Maybe the brain takes in simple stimuli, such as smells or symbols, performs some complex procedure, and generates rich memories on the fly.

In this framework, memories are not stored data, but rather impromptu experiences generated when stimuli are processed through intricate neural pathways. The brain might only store the basic building blocks of experience that constitute any and all memories. For instance, the colors we remember might be stored for future use like a painter's palette. While flavors, aromas, sensations, and words might be kept for instant access within the brain, their combinations only exist in the transient experience of remembering a scene from one's life. When someone mentions “sunset,” you can instantly imagine a red and pink horizon with the orange sun sinking beyond the edge of the world. Maybe such a picture only exists in your brain for the few seconds that you imagine it. What the brain actually stores is some algorithm for recognizing the word “sunset” and pulling together the right colors and shapes to recreate a familiar picture.
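To make the analogy a little more concrete, here is a toy sketch in Python. It is pure analogy, not neuroscience, and every name in it is invented: the only things “stored” are a small palette of building blocks and a procedure, while the detailed scene exists only for as long as the procedure runs.

import random

# A tiny "palette" of stored building blocks, nothing more.
PALETTE = {"sunset": ["red", "pink", "orange"],
           "ocean":  ["blue", "green", "white"]}

def recall(cue):
    """Regenerate a 'scene' on the fly from a cue; no scene is stored anywhere."""
    rng = random.Random(cue)                           # the cue acts as the seed
    colors = PALETTE.get(cue, ["gray"])
    horizon = [rng.choice(colors) for _ in range(5)]   # transient detail
    return cue + ": a horizon of " + ", ".join(horizon)

print(recall("sunset"))   # the picture is rebuilt each time it is "remembered"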

Neuroplasticity is a well-known phenomenon in which the brain “rewires” itself as part of the learning process and in reaction to incoming stimuli. Around the clock, your brain's neurons are changing their connections to one another: strengthening some connections, cutting off others, generally improving the neural network to perform whatever functions are needed. If you practice pitching a baseball often and long enough, your brain will organize itself to control your arms better for the purpose of throwing the ball. As you learn a second language, your brain is changing its own microscopic structure to process the additional vocabulary. What if this rewiring is a way of tweaking the algorithms that generate memories? Say an aviation enthusiast takes a flight that goes horribly wrong. The plane crashes and the enthusiast barely survives. Before the flight, the individual associated planes with positive emotions and aspirations of soaring like a bird. Saying “airplane” to this person before the accident might have triggered memories of the Wright brothers or looking at the ground from high up. After the crash, the individual's brain has rewired itself. Now, saying “airplane” might trigger memories of the crash and struggling to stay alive, accompanied by negative emotions like fear. Did the individual forget about the Wright brothers? Certainly not. The “algorithm” (that is, the pathways in the brain) has been altered so that the same stimulus, or “seed” (the word “airplane”), is processed differently than before, generating different memories.
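Continuing the same toy analogy (again, invented purely for illustration, not a model of the brain): if the stored “algorithm” is just a mapping from a cue to its associations, then rewiring amounts to overwriting part of that mapping, and the same seed afterwards regenerates a very different memory.

# The stored "pathways": a mapping from cue to associations.
associations = {"airplane": ["the Wright brothers", "the view from high up", "soaring"]}

def remember(cue):
    """Regenerate a memory from whatever the current pathways associate with the cue."""
    return cue + " -> " + ", ".join(associations.get(cue, ["nothing"]))

print(remember("airplane"))   # before the crash

# The "rewiring": a powerful new experience reshapes the stored associations.
associations["airplane"] = ["the crash", "struggling to stay alive", "fear"]

print(remember("airplane"))   # same seed, different generated memory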

Monday, August 1, 2011

MORE CALCULUS AT MIT

If you have gained competence in single variable calculus, and want to learn more, then you might try studying “Multivariable Calculus” from MIT. This course is taught by Professor Denis Auroux. The main focus here is on vector calculus and working with multiple variables. As is usual, you have access to transcripts, lecture notes, exams, and assignments in addition to the lecture videos.

This course includes the following topics: dot products, determinants, matrices, parametric equations, Kepler's Second Law, partial derivatives, least squares, second derivative test, chain rule, Lagrange Multipliers, partial differential equations, polar coordinates, change of variables, vector fields, path independence, flux, spherical coordinates, divergence theorem, line integrals, Stokes' Theorem, and Maxwell's Equations.


