Thursday, January 10, 2008

Voltage clamp

The voltage clamp uses a negative feedback mechanism. The membrane potential amplifier measures the membrane voltage and sends its output to the feedback amplifier, which subtracts the membrane voltage from the command voltage it receives from the signal generator. This error signal is amplified and returned into the cell via the recording electrode. The voltage clamp technique allows an experimenter to "clamp" the cell potential at a chosen value, making it possible to measure how much ionic current crosses the cell's membrane at any given voltage. This is important because many of the ion channels in the membrane of a neuron are voltage-gated ion channels, which open only when the membrane voltage is within a certain range. Voltage clamp measurements of current are made possible by the near-simultaneous digital subtraction of the transient capacitive currents that pass as the recording electrode and cell membrane are charged to alter the cell's potential.
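The feedback loop described above can be sketched as a minimal discrete-time simulation. This is an illustration only, not from the original text: the single-compartment RC membrane model, the gain, and all parameter values are assumptions chosen for clarity.

```python
# Minimal sketch of voltage-clamp negative feedback (hypothetical model).
# The membrane is treated as a simple RC circuit; the feedback amplifier
# injects a current proportional to the error between the command voltage
# and the measured membrane voltage.

def simulate_clamp(v_command, v_rest=-70.0, gain=50.0,
                   c_m=1.0, g_leak=0.1, dt=0.01, steps=5000):
    """Return the membrane voltage (mV) after clamping toward v_command."""
    v_m = v_rest
    for _ in range(steps):
        error = v_command - v_m              # feedback amplifier: V_cmd - V_m
        i_clamp = gain * error               # amplified error, injected as current
        i_leak = g_leak * (v_m - v_rest)     # passive leak current
        v_m += (i_clamp - i_leak) * dt / c_m # membrane charging step
    return v_m

print(round(simulate_clamp(-40.0), 1))  # -40.1: close to the -40.0 mV command
```

With a sufficiently high gain, the injected current drives the membrane voltage to the command voltage; the small residual offset shrinks as the gain grows relative to the leak conductance.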

Electrophysiology

Electrophysiology is the study of the electrical properties of biological cells and tissues. It involves measurements of voltage change or electrical current flow on a wide variety of scales from single ion channel proteins to whole tissues like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and particularly action potential activity.

Long-term

The storage in sensory memory and short-term memory generally have a strictly limited capacity and duration, which means that information is available for a certain period of time, but is not retained indefinitely. By contrast, long-term memory can store much larger quantities of information for potentially unlimited duration (sometimes a whole life span). For example, given a random seven-digit number, we may remember it for only a few seconds before forgetting, suggesting it was stored in our short-term memory. On the other hand, we can remember telephone numbers for many years through repetition; this information is said to be stored in long-term memory.
While short-term memory encodes information acoustically, long-term memory encodes it semantically: Baddeley (1966)[2] discovered that after 20 minutes, test subjects had the greatest difficulty recalling a collection of words that had similar meanings (e.g. big, large, great, huge).
Short-term memory is supported by transient patterns of neuronal communication, dependent on regions of the frontal lobe (especially dorsolateral prefrontal cortex) and the parietal lobe. Long-term memories, on the other hand, are maintained by more stable and permanent changes in neural connections widely spread throughout the brain. The hippocampus is essential to the consolidation of information from short-term to long-term memory, although it does not seem to store information itself. Rather, it may be involved in changing neural connections for a period of three months or more after the initial learning.
One of the main functions of sleep is thought to be the improvement of memory consolidation: it can be shown that memory depends on getting sufficient sleep between training and testing, and that the hippocampus replays activity from the current day during sleep.

Foundation of the science

On September 11, 1956, a large-scale meeting of cognitivists took place at the Massachusetts Institute of Technology. George A. Miller presented his paper "The Magical Number Seven, Plus or Minus Two", while Noam Chomsky and Newell & Simon presented their findings on computer science. Ulric Neisser commented on many of the findings of this meeting in his 1967 book Cognitive Psychology. The term "psychology" had been waning in the 1950s and 1960s, causing the field to be referred to as "cognitive science". Behaviorists such as Miller began to focus on the representation of language rather than general behavior. David Marr's proposal of the hierarchical representation of memory led many psychologists to embrace the idea that mental skills require significant processing in the brain, including the use of algorithms.

Later localizationists

Studies performed in Europe by scientists such as John Hughlings Jackson caused the localizationist view to re-emerge as the primary view of behavior. Jackson studied patients with brain damage, particularly those with epilepsy. He discovered that epileptic patients often made the same clonic and tonic muscle movements during their seizures, leading him to conclude that the seizures must originate from the same place in the brain each time. Jackson proposed a topographic map of the brain, which was critical to the future understanding of the brain's lobes.
In 1861, French neurologist Paul Broca came across a man who was able to understand language but unable to speak; the man could only produce the sound "tan". It was later discovered that the man had damage to an area of his left frontal lobe now known as Broca's area. Carl Wernicke, a German neurologist, found a similar patient, except that this patient could speak fluently but nonsensically. The patient had been the victim of a stroke and could not understand spoken or written language. This patient had a lesion in the area where the left parietal and temporal lobes meet, now known as Wernicke's area. These cases strongly supported the localizationist view, because a lesion caused a specific behavioral change in each of these patients.
In 1870, German physicians Eduard Hitzig and Gustav Fritsch published their findings about the behavior of animals. Hitzig and Fritsch ran an electrical current through the cerebral cortex of a dog, causing it to produce characteristic movements depending on where the current was applied. Since different areas produced different movements, the physicians concluded that behavior was rooted at the cellular level. German neuroanatomist Korbinian Brodmann used tissue-staining techniques developed by Franz Nissl to see the different types of cells in the brain. Through this study, Brodmann concluded in 1909 that the human brain consisted of fifty-two distinct areas, now called Brodmann areas. Many of Brodmann's distinctions proved very accurate, such as the differentiation of Brodmann area 17 from Brodmann area 18.

Cognitive neuroscience

Cognitive neuroscience is an academic field concerned with the scientific study of the biological mechanisms underlying cognition, with a specific focus on the neural substrates of mental processes and their behavioral manifestations. It addresses the question of how psychological and cognitive functions are produced by neural circuitry. Cognitive neuroscience is a branch of both psychology and neuroscience, unifying and overlapping with several sub-disciplines such as cognitive psychology, psychobiology and neurobiology. Before the advent of fMRI, cognitive neuroscience was called cognitive psychophysiology. Cognitive neuroscientists typically have a background in experimental psychology or neurobiology, but may come from disciplines such as psychiatry, neurology, physics, linguistics and mathematics.
Methods employed in cognitive neuroscience include experimental paradigms from psychophysics and cognitive psychology, functional neuroimaging, electrophysiological studies of neural systems and, increasingly, cognitive genomics and behavioral genetics. Clinical studies in psychopathology in patients with cognitive deficits constitute an important aspect of cognitive neuroscience. The main theoretical approaches are computational neuroscience and the more traditional, descriptive cognitive psychology theories such as psychometrics.

Prevention

Bugs are a consequence of the nature of human factors in the programming task. They arise from oversights made by computer programmers during design, coding and data entry. For example, in creating a relatively simple program to sort a list of words into alphabetical order, one's design might fail to consider what should happen when a word contains a hyphen. Perhaps, when converting the abstract design into the chosen programming language, one might inadvertently introduce an off-by-one error and fail to sort the last word in the list. Finally, when typing the resulting program into the computer, one might accidentally type a '<' where a '>' was intended, perhaps resulting in the words being sorted into reverse alphabetical order.

More complex bugs can arise from unintended interactions between different parts of a computer program. This frequently occurs because computer programs can be complex - millions of lines long in some cases - often having been written by many people over a great length of time, so that programmers are unable to mentally track every possible way in which the parts can interact.

Another category of bug, called a race condition, comes about when a process runs in more than one thread, or when two or more processes run simultaneously, and the exact order of execution of the critical sequences of code has not been properly synchronized.
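The '<' versus '>' slip described above can be made concrete with a small sketch. This is hypothetical illustration code, not taken from the original text, using a simple insertion sort over a list of words:

```python
# Hypothetical illustration of the '<' vs '>' slip: one wrong character
# in the comparison silently reverses the sort order.

def sort_words(words, flipped_comparison=False):
    """Insertion sort over words; flipping the comparison reverses the order."""
    result = []
    for word in words:
        i = 0
        # Intended: result[i] < word (ascending). Accidentally typing '>'
        # here sorts the list into reverse alphabetical order instead.
        while i < len(result) and (
            result[i] > word if flipped_comparison else result[i] < word
        ):
            i += 1
        result.insert(i, word)
    return result

print(sort_words(["cat", "apple", "bee"]))
# ['apple', 'bee', 'cat']
print(sort_words(["cat", "apple", "bee"], flipped_comparison=True))
# ['cat', 'bee', 'apple'] - reversed by a one-character typo
```

The program runs without any error message in both cases, which is what makes this class of bug easy to introduce and hard to notice without testing.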
The software industry has put much effort into finding methods for preventing programmers from inadvertently introducing bugs while writing software.[8][9] These include:

A software bug

A software bug (or just "bug") is an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result). Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.

Philosophy

Philosophy is the discipline concerned with questions of how one should live (ethics); what sorts of things exist and what their essential natures are (metaphysics); what counts as genuine knowledge (epistemology); and why humans create and consider things beautiful (aesthetics).[1][2] The word is of Greek origin: φιλοσοφία (philosophía), meaning love of wisdom or knowledge.[3]

A mnemonic

A mnemonic (pronounced /nəˈmɒnɪk/) is a memory aid. Mnemonics are often verbal, such as a very short poem or a special word used to help a person remember something, particularly lists. Mnemonics rely not only on repetition to remember facts, but also on associations between easy-to-remember constructs and lists of data, based on the principle that the human mind much more easily remembers insignificant data attached to spatial, personal, or otherwise meaningful information than data occurring in meaningless sequences. The sequences must make sense, though: if a random mnemonic is made up, it is not necessarily a memory aid.[citation needed]
The word mnemonic is derived from the Ancient Greek word μνημονικός mnemonikos ("of memory") and is related to Mnemosyne ("remembrance"), the name of the goddess of memory in Greek mythology. Both of these words refer back to μνημα mnema ("remembrance").[1] The first known reference to mnemonics is the method of loci described in Cicero's De Oratore.
A major assumption behind mnemonic techniques is that there are two sorts of memory: the "natural" memory and the "artificial" memory. The former is inborn, and is the one that everyone uses every day. The artificial memory is trained through learning and practicing a variety of mnemonic techniques. The latter can be used to perform feats of memory that are quite extraordinary and impossible to carry out using the natural memory alone.