We are instinctual; we ain't that popular, but we are tormented. We give a fuck.

 

fuckyeahmolecularbiology:

The Bionic Ear

The cochlear implant, or bionic ear, is used to provide hearing in patients who are deaf due to damage to the sensory hair cells in their cochlea. In such patients, the implant can often enable sufficient hearing to allow unaided understanding of speech. The quality of sound is different from natural hearing, with less sound information being received and processed by the brain. Even so, many patients are able to hear and understand speech and environmental sounds, and newer devices and processing strategies allow recipients to hear better in noise, enjoy music, and even use their implant processors while swimming.

Cochlear implants work by “cutting out the middle man.” Instead of relying on the stereocilia to transduce sound via gated ion channels, the implant Fourier-transforms the incoming vibrations and feeds the transformed signals directly to the auditory nerve for processing by the brain.
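As a loose software analogy (a toy sketch only - real implant firmware and stimulation strategies are far more sophisticated, and the function name and band edges here are invented), the processing amounts to splitting each incoming sound frame into frequency bands and turning each band’s energy into a stimulation level for one electrode along the cochlea:

```python
import numpy as np

# Toy cochlear-implant-style chain: short-time FFT -> group bins into a
# handful of log-spaced bands -> band energies become electrode levels.

def frame_to_electrodes(frame: np.ndarray, sample_rate: float,
                        n_electrodes: int = 8) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Logarithmic band edges, mimicking the cochlea's frequency map.
    edges = np.logspace(np.log10(200), np.log10(8000), n_electrodes + 1)
    levels = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return levels / (levels.max() + 1e-12)  # normalised stimulation levels

# A 1 kHz tone should light up mainly one mid-frequency "electrode".
sr = 16000
t = np.arange(512) / sr
print(np.round(frame_to_electrodes(np.sin(2 * np.pi * 1000 * t), sr), 2))
```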

All About Neutrinos

What is this thing, anyways?

Neutrinos are subatomic particles produced by the decay of radioactive elements. They are elementary particles that lack an electric charge - or, as F. Reines would say, “…the most tiny quantity of reality ever imagined by a human being.” The name “neutrino” was coined by Enrico Fermi as a word play on neutrone, the Italian name of the neutron.
Of all high-energy particles, only weakly interacting neutrinos can directly convey astronomical information from the edge of the universe and from deep inside the most cataclysmic high-energy processes. As far as we know, there are three different types of neutrinos, each type relating to a charged particle, as shown in the following table:

Neutrino:         νe            νμ         ντ
Charged partner:  electron (e)  muon (μ)   tau (τ)

Copiously produced in high-energy collisions, travelling essentially at the speed of light, and unaffected by magnetic fields, neutrinos meet the basic requirements for astronomy. Their unique advantage arises from a fundamental property: they are affected only by the weakest of nature’s forces (but for gravity) and are therefore essentially unabsorbed as they travel cosmological distances between their origin and us.
Where are they coming from?

From what we know today, the majority of the neutrinos floating around were born around 14 billion years ago, soon after the birth of the universe. Since that time, the universe has continuously expanded and cooled, and the neutrinos have just kept on going. Theoretically, there are now so many neutrinos that they constitute a cosmic background radiation whose temperature is 1.9 kelvin (-271.2 degrees Celsius).
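That temperature is no arbitrary figure. In standard Big Bang cosmology the relic neutrinos decoupled before electron-positron annihilation reheated the photons, so the neutrino background must be colder than today’s 2.73 K photon background by a fixed factor:

\[
T_\nu = \left(\frac{4}{11}\right)^{1/3} T_\gamma \approx 0.714 \times 2.73\ \mathrm{K} \approx 1.95\ \mathrm{K}.
\]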
Other neutrinos are constantly being produced by nuclear power stations, particle accelerators, nuclear bombs, and general atmospheric phenomena, and during the births, collisions, and deaths of stars, particularly the explosions of supernovae.
A little bit of history

1931 - Pauli presents hypothetical “neutron” particle
1934 - Fermi develops theory of weak interaction and baptizes the neutrino
1956 - First discovery of the neutrino by an experiment
1962 - Discovery of another type of neutrino at Brookhaven National Lab
1968 - First experiment to detect electron neutrinos produced by the sun
1978 - Discovery of the tau lepton at Stanford Linear Accelerator Center; existence of tau neutrino theorized
1983 - Kamiokande becomes operational
1985 - The IMB experiment; Russian team reports measurement of non-zero neutrino mass
1987 - Kamiokande and IMB detect simultaneous burst of neutrinos from Supernova 1987A
1989 - Kamiokande becomes second experiment detecting neutrinos from the sun and confirms anomaly of finding only 1/3 the expected rate
1990 - IMB confirms deficit of muon neutrino interactions
1991 - LEP experiments show that there are only three light neutrinos
1994 - First proclamation of possible neutrino oscillations seen by the LSND experiment
1995 - Missing solar neutrinos confirmed by GALLEX
1996 - AMANDA neutrino telescope observes neutrinos at the South Pole
1998 - Super-Kamiokande collaboration announces evidence of non-zero neutrino mass
The neutrino was first postulated in December 1930 by Wolfgang Pauli to explain the energy spectrum of beta decay, the decay of a neutron into a proton and an electron. Pauli theorized that an undetected particle was carrying away the observed difference between the energy and angular momentum of the initial and final particles. Because of their “ghostly” properties, the first experimental detection of neutrinos had to wait until about 25 years after they were first discussed. In 1956 Clyde Cowan, Frederick Reines, F. B. Harrison, H. W. Kruse, and A. D. McGuire published the article “Detection of the Free Neutrino: a Confirmation” in Science, a result that was rewarded with the 1995 Nobel Prize.
In 1962 Leon M. Lederman, Melvin Schwartz, and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino. When a third type of lepton, the tau, was discovered in 1975 at the Stanford Linear Accelerator, it too was expected to have an associated neutrino. First evidence for this third neutrino type came from the observation of missing energy and momentum in tau decays, analogous to the beta decay that had led to the discovery of the neutrino in the first place. The first detection of actual tau neutrino interactions was announced in the summer of 2000 by the DONUT collaboration at Fermilab, making it the latest particle of the Standard Model to have been directly observed.
A practical method for investigating neutrino masses (that is, flavour oscillation) was first suggested by Bruno Pontecorvo in 1957 using an analogy with the neutral kaon system; over the subsequent 10 years he developed the mathematical formalism and the modern formulation of vacuum oscillations. In 1985 Stanislav Mikheyev and Alexei Smirnov (expanding on 1978 work by Lincoln Wolfenstein) noted that flavour oscillations can be modified when neutrinos propagate through matter. This so-called MSW effect is important for understanding neutrinos emitted by the Sun, which pass through its dense interior on their way to detectors on Earth.
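For reference, the formalism Pontecorvo pioneered gives, in the simplest two-flavour vacuum case, the standard oscillation probability

\[
P_{\alpha \to \beta} = \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

where θ is the mixing angle, Δm² is the difference of the squared masses, L is the distance travelled and E is the neutrino energy. Oscillations require Δm² ≠ 0, which is why observing them implies that neutrinos have mass.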
Just passing through!

It is the feeble interaction of neutrinos with matter that makes them uniquely valuable as astronomical messengers. Unlike photons or charged particles, neutrinos can emerge from deep inside their sources and travel across the universe without interference. They are not deflected by interstellar magnetic fields and are not absorbed by intervening matter. However, this same trait makes cosmic neutrinos extremely difficult to detect; immense instruments are required to find them in sufficient numbers to trace their origin.
Neutrinos can interact via the neutral current (involving the exchange of a Z boson) or the charged current (involving the exchange of a W boson) weak interactions.

In a neutral current interaction, the neutrino leaves the detector after having transferred some of its energy and momentum to a target particle. All three neutrino flavors can participate, regardless of the neutrino energy. However, no neutrino flavor information is left behind.
In a charged current interaction, the neutrino transforms into its partner lepton (electron, muon, or tau). However, if the neutrino does not have sufficient energy to create its heavier partner’s mass, the charged current interaction is unavailable to it. Solar and reactor neutrinos have enough energy to create electrons. Most accelerator-based neutrino beams can also create muons, and a few can create taus. A detector which can distinguish among these leptons can reveal the flavor of the incident neutrino in a charged current interaction. Because the interaction involves the exchange of a charged boson, the target particle also changes character (e.g., neutron to proton).
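To put rough numbers on “sufficient energy”: the charged-lepton rest masses set the scale (true reaction thresholds are somewhat higher once the target particle’s recoil is included):

\[
m_e c^2 \approx 0.511\ \mathrm{MeV}, \qquad m_\mu c^2 \approx 105.7\ \mathrm{MeV}, \qquad m_\tau c^2 \approx 1777\ \mathrm{MeV},
\]

so a few-MeV solar or reactor neutrino can only produce electrons, while creating a tau demands a multi-GeV beam.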
Butterfly Nets For Ghosts

Many of the outstanding mysteries of astrophysics may be hidden from our sight at all wavelengths of the electromagnetic spectrum because of absorption by matter and radiation between us and the source. For example, the hot, dense regions that form the central engines of stars and galaxies are opaque to photons. In other cases, such as supernova remnants, gamma-ray bursters, and active galaxies, all of which may involve compact objects or black holes at their cores, the precise origin of the high-energy photons emerging from their surface regions is uncertain. Therefore, data obtained through a variety of observational windows, and especially through direct observations with neutrinos, may be of cardinal importance. Several methods have been developed to observe the elusive neutrino:
1. Reines and Cowan used two targets containing a solution of cadmium chloride in water. Two scintillation detectors were placed next to the cadmium targets. Antineutrino charged current interactions with the protons in the water produced positrons and neutrons. The resulting positron annihilations with electrons created photons with an energy of about 0.5 MeV. Pairs of photons in coincidence could be detected by the two scintillation detectors above and below the target. The neutrons were captured by cadmium nuclei, resulting in gamma rays of about 8 MeV that were detected a few microseconds after the photons from a positron annihilation event. Today, the much larger KamLAND detector uses similar techniques and 53 Japanese nuclear power plants to study neutrino oscillation.
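The positron flash followed, microseconds later, by a neutron-capture flash is simply a delayed-coincidence search. A minimal sketch of that logic in Python (the hit lists, function name and window values are invented for illustration, not taken from any experiment):

```python
# Pair each prompt (positron) hit with a delayed (neutron-capture) hit
# that follows it within a fixed time window - the Reines-Cowan signature.

def delayed_coincidences(prompt_times, delayed_times,
                         min_dt=0.5e-6, max_dt=10e-6):
    """Return (prompt, delayed) time pairs separated by min_dt..max_dt seconds."""
    pairs = []
    for tp in prompt_times:
        for td in delayed_times:
            if min_dt <= td - tp <= max_dt:
                pairs.append((tp, td))
    return pairs

# Toy data: one true pair (2 microseconds apart) plus uncorrelated background.
prompt = [1.0e-3, 5.0e-3]
delayed = [1.0e-3 + 2e-6, 7.3e-3]
print(delayed_coincidences(prompt, delayed))  # one matched pair survives
```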
2. Chlorine detectors consist of a tank filled with a chlorine-bearing fluid such as tetrachloroethylene. A neutrino converts a chlorine atom into one of argon via the charged current interaction. The fluid is periodically purged with helium gas, which removes the argon. The helium is then cooled to separate out the argon. A chlorine detector in the former Homestake Mine near Lead, South Dakota, containing 520 short tons (470 metric tons) of fluid, made the first measurement of the deficit of electron neutrinos from the sun (see solar neutrino problem). A similar detector design uses a gallium-to-germanium transformation, which is sensitive to lower-energy neutrinos. This latter method is nicknamed the “Alsace-Lorraine” technique because of the reaction sequence (gallium-germanium-gallium) involved. These chemical detection methods are useful only for counting neutrinos; no neutrino direction or energy information is available.
3. “Ring-imaging” detectors take advantage of the Cherenkov light produced by charged particles moving through a medium faster than the speed of light in that medium. In these detectors, a large volume of clear material (e.g., water or ice) is surrounded by light-sensitive photomultiplier tubes. A charged lepton produced with sufficient energy creates Cherenkov light, which leaves a characteristic ring-like pattern of activity on the array of photomultiplier tubes. This pattern can be used to infer direction, energy, and (sometimes) flavor information about the incident neutrino.
Two water-filled detectors of this type (Kamiokande and IMB) recorded the neutrino burst from supernova 1987A. The largest such detector is the water-filled Super-Kamiokande. IceCube and the AMANDA project take advantage of this method on a much larger scale by using ice instead of water; to facilitate this, both are constructed in Antarctica at the South Pole, the only place to find a chunk of ice big enough!

The Sudbury Neutrino Observatory (SNO) uses heavy water. In addition to the neutrino interactions available in a regular water detector, the deuterium in the heavy water can be broken up by a neutrino. The resulting free neutron is subsequently captured, releasing a burst of gamma rays which are detected. All three neutrino flavors participate equally in this dissociation reaction.

The MiniBooNE detector employs pure mineral oil as its detection medium. Mineral oil is a natural scintillator, so charged particles without sufficient energy to produce Cherenkov light can still produce scintillation light. This allows low-energy muons and protons, invisible in water, to be detected.
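A quick worked number shows why slow particles are invisible in a water Cherenkov detector. Emission requires v > c/n, i.e. β > 1/n; with n ≈ 1.33 for water,

\[
\beta_{\min} = \frac{1}{1.33} \approx 0.75, \qquad \gamma_{\min} = \frac{1}{\sqrt{1-\beta_{\min}^2}} \approx 1.52,
\]

so a muon (rest energy 105.7 MeV) radiates no Cherenkov light until its total energy exceeds roughly 1.52 × 105.7 ≈ 160 MeV, i.e. about 54 MeV of kinetic energy. MiniBooNE’s scintillation light sidesteps exactly this threshold.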
4. Tracking calorimeters such as the MINOS detectors use alternating planes of absorber material and detector material. The absorber planes provide detector mass, while the detector planes provide the tracking information. Steel is a popular absorber choice, being relatively dense and inexpensive, and having the advantage that it can be magnetised.
5. The NOvA proposal suggests the use of particle board as a cheap way of getting a large amount of less-dense mass. The active detector is often liquid or plastic scintillator, read out with photomultiplier tubes, although various kinds of ionisation chambers have also been used.

Tracking calorimeters are only useful for high-energy (GeV range) neutrinos. At these energies, neutral current interactions appear as a shower of hadronic debris, and charged current interactions are identified by the presence of the charged lepton’s track (possibly alongside some form of hadronic debris). A muon produced in a charged current interaction leaves a long penetrating track and is easy to spot. The length of this muon track and its curvature in the magnetic field provide energy and charge (μ⁺ versus μ⁻) information. An electron in the detector produces an electromagnetic shower, which can be distinguished from hadronic showers if the granularity of the active detector is small compared to the physical extent of the shower. Tau leptons decay essentially immediately to either pions or another charged lepton, and can’t be observed directly in this kind of detector. (To directly observe taus, one typically looks for a kink in tracks in photographic emulsion.)
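The curvature measurement above uses the standard relation between a track’s bending radius R in a magnetic field B and its momentum,

\[
p\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}],
\]

so, for example, a muon bending with a 10 m radius in a 1.5 T field carries about 4.5 GeV/c, and the sign of the curvature distinguishes μ⁺ from μ⁻.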
6. Most neutrino experiments must address the flux of cosmic rays that bombard the Earth’s surface. The higher-energy (>50 MeV or so) neutrino experiments often cover or surround the primary detector with a “veto” detector, which reveals when a cosmic ray passes into the primary detector, allowing the corresponding activity in the primary detector to be ignored (“vetoed”). For lower-energy experiments, the cosmic rays are not directly the problem. Instead, the spallation neutrons and radioisotopes produced by the cosmic rays may mimic the desired physics signals. For these experiments, the solution is to locate the detector deep underground so that the earth above can reduce the cosmic ray rate to tolerable levels.
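In software terms, a veto is an anti-coincidence cut: any event in the primary detector that lands too close in time to a hit in the surrounding veto detector is thrown away. A bare-bones Python sketch (the 10 µs window and the data are made up for illustration):

```python
import bisect

# Discard primary-detector events occurring within `window` seconds of a
# cosmic-ray hit in the veto detector (an anti-coincidence cut).

def apply_veto(event_times, veto_times, window=1e-5):
    veto_times = sorted(veto_times)
    kept = []
    for t in event_times:
        i = bisect.bisect_left(veto_times, t)
        near_prev = i > 0 and t - veto_times[i - 1] < window
        near_next = i < len(veto_times) and veto_times[i] - t < window
        if not (near_prev or near_next):
            kept.append(t)
    return kept

print(apply_veto([0.010, 0.020, 0.030], [0.020 + 3e-6]))  # 0.020 is vetoed
```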

fuckyeahmolecularbiology:

Ancient Viral DNA in the Human Genome

Traces of ancient viruses, which infected our ancestors millions of years ago, are more widespread throughout our modern-day genomes than was previously thought.

The new research sheds light on the origins of a large proportion of our genetic material, much of which is still not understood. Only about 1.5% of the human genome, so far as we’ve discovered, codes for useful protein products; half of the rest is labeled “junk DNA” - although new research indicates it has a variety of purposes - while the other half was introduced by viruses or parasites, like the ancient ones studied here.

The senior author on the study (published in the Proceedings of the National Academy of Sciences), Dr. Robert Belshaw from Oxford University’s Zoology Department, said: “This is the story of an epidemic within every animal’s genome, a story which has been going on for 100 million years and which continues today.

“Much of the dark matter in our genome plays by its own rules, in the same way as an epidemic of an infectious disease but operating over millions of years. Learning the rules of this ancient game will help us understand their role in health and disease.”

Not only do these viruses exist within our genomes, but some are even helpful, Dr. Belshaw insists. The protein syncytin, for example - derived from a virus - helps develop the human placenta.

The study’s authors believe that unlocking the secrets of these viruses is essential to understanding the full complexity of the human genome.

Image: SEM image of Influenza A virus.


fuckyeahmolecularbiology:

Super Synthetic: Why Your Cells Might Be Smarter Than Your Calculator
Synthetic biologists have programmed a mammalian cell to calculate basic logical operations thanks to a highly complex artificial gene network.
A team of researchers from ETH Zurich headed by Martin Fussenegger, a professor of biotechnology and bioengineering at ETH Zurich’s Department of Biosystems in Basel, has constructed a gene network that can perform logical operations and, as a result, initiate specific metabolic steps. “We have developed the first real cellular calculator,” says Fussenegger.
Using biological components, the researchers developed a set of different elements that can be interconnected in different combinations and subsequently perform logical operations. These circuit elements, which are known as “logic gates” in the jargon, use the apple molecule phloretin and the antibiotic erythromycin as input signals. The calculations performed are based on Boolean logic.
“By combining several logic gates, we have achieved an unprecedented level of complexity in the synthetic gene network in cells,” stresses Professor Fussenegger. Even more remarkably, the bio-computer can process two different input and output signals in parallel. This sets the bio-computer apart from digital electronics, which works only with electrons.
“Of course, our cell calculator is nowhere near as efficient as a PC,” says the ETH-Zurich professor. “By nature, however, a cell can process many different metabolic products in parallel.”
By combining and interconnecting several logic gates, the biotechnologists ultimately obtained a “half-adder” and a “half-subtractor,” both central circuit elements in computer technology. A half-adder is a basic digital circuit that adds two binary numbers; the half-subtractor, on the other hand, subtracts them. These two elements are found in every digital calculator, where they perform most calculations. In cell-culture experiments, the two bio-computer components produced solid results.
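For readers who want the digital-logic side made concrete, here is a minimal sketch in ordinary Python (this has nothing to do with the ETH team’s biological implementation; it only shows the Boolean behaviour a half-adder and half-subtractor must reproduce, with molecules such as phloretin and erythromycin standing in for the 0/1 inputs in the cellular version):

```python
# Half-adder:      sum = A XOR B,        carry  = A AND B
# Half-subtractor: difference = A XOR B, borrow = (NOT A) AND B

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs a and b."""
    return a ^ b, a & b

def half_subtractor(a: int, b: int) -> tuple[int, int]:
    """Return (difference, borrow) for a - b on one-bit inputs."""
    return a ^ b, (1 - a) & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        d, w = half_subtractor(a, b)
        print(f"A={a} B={b} | sum={s} carry={c} | diff={d} borrow={w}")
```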
Scientists hope that these “cell calculators” can be widely used for a variety of applications, including monitoring and regulating a patient’s metabolism to help with diseases like diabetes. While still a far cry from medical applications, Professor Fussenegger is optimistic. “It’s [just] wonderful that a mammalian cell can calculate like that!” he beams.
For more about synthetic cells, read here and here.


A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts

Champagne, if you are seeking the truth, is better than a lie detector. It encourages a man to be expansive, even reckless, while lie detectors are only a challenge to tell lies successfully.

kidsneedscience:

Known commonly as the June bug or June beetle, or less commonly as the May bug or May beetle, the genus Phyllophaga is a very large member (some 260 species) of the New World scarab family (family in the general sense of the word, not taxonomic) of insects, belonging to the order known as Coleoptera, named for the protective wing coverings.

The name Phyllophaga, however, comes from the Ancient Greek words phyllon (φυλλον), meaning leaf, and phagos (φαγος), in the feminine form phaga, meaning to eat. These are both good words to know for scientific word building - both show up often in various forms as prefix and suffix: chlorophyll, coprophage, etc.

Image of Emerald June bug by peppergrass, used with permission under a Creative Commons 3.0 license. 

Image of June bug by cotinis, used with permission under a Creative Commons 3.0 license.

ikenbot:

What Is Sand Made Of?
Image: Magnified sand under 250x microscope
Simply put, sand is made of tiny particles of worn-down rock. These particles are picked up by wind, water or the ice in glaciers and left as sediment in the ocean or as sand dunes on land.
The composition of sand varies and depends on the local rocks, but the most common material is silica, more often known as quartz. Coral, lava rock and gypsum are other materials often found in sand. The size and texture of sand particles vary and can offer insight into where the sand came from. A very small grain of sand, for example, is easier for the wind to blow around and may have traveled a long distance. The roundness of the sand may provide a clue as to how it was formed: sand from bodies of water with strong bottom currents differs from sand transported by rivers or streams, which tends to be very round. The International Sand Collectors Society offers a chart with size classes (in millimeters) for sand and mud [source: Sand Collectors].
Sand dunes form when a lot of loose sand is in an area that also has little vegetation to stand in the way. With enough wind and some sort of obstacle to serve as a sort of blocking or gathering point for the blowing sand, the particles gather and form a dune. Sand dunes reproduce when two crescent-shaped dunes collide, thanks to a little encouragement from their matchmaker friend the wind. When a small dune runs into a larger one — a very slow process that can take as long as a year — the smaller one can pass through it. If the sand dune is unstable, the horns at each end of the crescent shape will break off and become two even smaller dunes. Researchers refer to this process as “breeding.”
The tallest sand dunes in North America are at the base of the Sangre de Cristo Mountains in Colorado. Visitors to the Great Sand Dunes National Park can see how the massive dunes formed from sediments deposited in a deep valley. Scientists have discovered that a huge lake probably once covered the valley and receded as the climate changed. The large sheet of sand left behind blew with the southwest wind and accumulated in a natural pocket formed by a combination of three mountain passes. Opposing wind directions helped create the vertical shape of the dunes [source: National Park Service].


ucsdhealthsciences:

How Infectious Disease May Have Shaped Human Origins

Inactivation of two genes may have allowed escape from bacterial pathogens, researchers say
Roughly 100,000 years ago, human evolution reached a mysterious bottleneck: Our ancestors had been reduced to perhaps five to ten thousand individuals living in Africa. In time, “behaviorally modern” humans would emerge from this population, expanding dramatically in both number and range, and replacing all other co-existing evolutionary cousins, such as the Neanderthals.
The cause of the bottleneck remains unsolved, with proposed answers ranging from gene mutations to cultural developments like language to climate-altering events, among them a massive volcanic eruption.
Add another possible factor: infectious disease.
In a paper published in the June 4, 2012 online Early Edition of the Proceedings of the National Academy of Sciences, an international team of researchers, led by scientists at the University of California, San Diego School of Medicine, suggests that inactivation of two specific genes related to the immune system may have conferred on selected ancestors of modern humans improved protection from some pathogenic bacterial strains, such as Escherichia coli K1 and Group B Streptococci, the leading causes of sepsis and meningitis in human fetuses, newborns and infants.
“In a small, restricted population, a single mutation can have a big effect, a rare allele can get to high frequency,” said senior author Ajit Varki, MD, professor of medicine and cellular and molecular medicine and co-director of the Center for Academic Research and Training in Anthropogeny at UC San Diego. “We’ve found two genes that are non-functional in humans, but not in related primates, which could have been targets for bacterial pathogens particularly lethal to newborns and infants. Killing the very young can have a major impact upon reproductive fitness. Species survival can then depend upon either resisting the pathogen or on eliminating the target proteins it uses to gain the upper hand.” More here
In the above photo, Escherichia coli bacteria, like these in a false-color scanning electron micrograph by Thomas Deerinck at UC San Diego’s National Center for Microscopy and Imaging Research, cause a variety of often life-threatening conditions, particularly among the young. Varki and colleagues suggest a genetic change 100,000 or so years ago conferred improved protection from these microbes, and likely altered human evolutionary development.
