How does myopia actually work?

This is potentially more or less a question about optics, but I've never been truly satisfied with any explanation that I've been given about Myopia. In fact, I almost can't even believe it. There are a million different versions of this diagram:

I understand that images of things could appear blurry when those things are focused in front of the retina, but then what discriminates between near and far things? Shouldn't everything be blurry? What exactly is different about this image when the thing being viewed is in close proximity to the eye?


The image you gave is unclear because it does not show that the divergence pattern of light from a close object is different from that of light from a distant object.

Light rays from a close object (first image) diverge, and therefore they are in focus for the myopic eye.

Light rays from a distant object (second image) are approximately parallel, and therefore they are not in focus for the myopic eye. The corrective lens compensates for this inability to focus and allows the distant object to come into focus.

Also note that a myopic person wearing corrective lenses is still capable of focusing on close objects, because the eye's own lens can still adjust dynamically, though less so than in a person with perfect eyesight. You can test this by focusing on a near object and then putting on your glasses. The object should initially be out of focus, and then come into focus as the eye's lens adjusts to the distance.

Myopia and presbyopia are fundamentally issues of the focal range of the eye becoming limited. If the ability of the eye's lens to adjust dynamically weakens further, progressive lenses or bifocals may be required to correct the issue.
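To make the vergence argument concrete, here is a minimal thin-lens sketch in Python (an illustration added here, not part of the original answer; distances are in meters, powers in diopters, vertex distance is ignored, and the -2 D myope is a hypothetical example):

```python
# Thin-lens vergence arithmetic for a myopic eye (minimal sketch).

def corrective_power(far_point_m: float) -> float:
    """Power of the diverging lens that images infinity at the far point.

    A myopic eye can focus objects no farther than its far point, so the
    corrective lens must make parallel rays (vergence 0 D) appear to
    diverge from that point (vergence -1/far_point D).
    """
    return -1.0 / far_point_m

def object_vergence(distance_m: float) -> float:
    """Vergence (in diopters) of the light reaching the eye from an object."""
    return -1.0 / distance_m

fp = 0.5                           # hypothetical myope with a 0.5 m far point
print(corrective_power(fp))        # -2.0 -> a "-2.0 D" prescription
print(object_vergence(0.5))        # -2.0 D: at the far point, sharp unaided
print(object_vergence(1000.0))     # ~0 D: a distant object's rays are parallel
```

The numbers show the answer's point: a distant object sends essentially parallel rays (vergence near 0 D), which the too-powerful myopic eye focuses in front of the retina, while a near object's diverging rays happen to land in focus.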


The Role of Atropine Eye Drops in Myopia Control

High myopia is a major cause of uncorrectable visual impairment. It imposes major challenges and costs for refractive correction, and for the treatment of associated pathological complications. In the last 60 years, there has been a marked increase in the prevalence of high myopia in younger generations in developed countries in East and Southeast Asia, and there are signs of similar, but less pronounced increases in North America and Europe. In some parts of the world, 70-90% of children completing high school are now myopic, and as many as 20% may be highly myopic. It is now clear that myopia results from excessive axial elongation of the eye, and this greater rate of axial elongation appears to be environmentally driven. Experimental studies have examined the biochemical mechanisms involved in regulation of axial elongation and, from these studies, some options have emerged for preventing the development of myopia or slowing myopia progression. Atropine eye drops have been quite extensively used in clinical practice in Asian countries. This long-lasting treatment could be beneficial, but has clear limitations and complications. Recent reports suggest that a low concentration of atropine, which has less severe side-effects, is also effective. But a decision to use an invasive treatment such as atropine drops, even at low doses, requires careful consideration of the risk of myopia progression. A decision to use atropine in pre-myopic patients would require even more careful consideration of the risks. Here, we review the current literature relevant to the prevention of myopia progression with atropine drops.


Macular Degeneration and AREDS 2 Supplements

Four years ago I wrote about the premature marketing of a diet supplement for macular degeneration before the results of a trial to test it were available. Now that we know the results of that trial, a follow-up post is in order.

Age-related macular degeneration (AMD) is a leading cause of blindness. The incidence increases with age: it affects 10% of people aged 66-74 and 30% of people aged 75-85. There are known risk factors, including genetics and smoking, but there is no effective prevention. There are multiple diet supplement products on the market that are advertised as “supporting eye health.” Some are based on evidence from randomized, controlled studies, but the advertising hype goes beyond the evidence and tends to mislead consumers. There is evidence that supplementation may slow the progression of moderate to severe AMD, but there is no evidence that supplements are effective in milder disease or for preventing AMD from developing in the first place.

The AREDS trial

The original AREDS (Age-Related Eye Disease Study) was a large, multicenter trial of patients with established AMD to evaluate the effect of a combination of three antioxidants (vitamin C, 500 mg; vitamin E, 400 IU; and vitamin A in the form of beta carotene, 25,000 IU) with zinc and copper. Over 5 years, patients taking the antioxidant and zinc supplement had a 23% chance of developing vision loss from advanced AMD, compared to a 29% chance for patients taking a placebo pill. This effect was statistically significant, but modest. Concerns were raised about the high dose of vitamin A, since beta carotene was known to cause harm at those levels, so another study was designed to evaluate a different combination that omitted the vitamin A and added other possibly beneficial components.
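To see why the author calls a statistically significant effect "modest," the standard risk arithmetic can be worked out from the two percentages quoted above (a quick sketch; the number-needed-to-treat framing is an added illustration, not the trial's own):

```python
# Risk arithmetic for the quoted AREDS result: 23% (supplement) vs 29%
# (placebo) risk of vision loss from advanced AMD over 5 years.

risk_treated = 0.23
risk_placebo = 0.29

arr = risk_placebo - risk_treated   # absolute risk reduction
rrr = arr / risk_placebo            # relative risk reduction
nnt = 1.0 / arr                     # number needed to treat

print(f"ARR: {arr:.1%}")   # 6.0 percentage points in absolute terms
print(f"RRR: {rrr:.1%}")   # ~20.7% in relative terms
print(f"NNT: {nnt:.0f}")   # ~17 people treated for 5 years per case averted
```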

The AREDS 2 trial

In the new trial, there were four groups: (1) a control group got the original AREDS formula, and the other 3 got a formula that omitted the vitamin A and added (2) lutein and zeaxanthin, (3) the omega-3s DHA and EPA, and (4) both lutein/zeaxanthin and DHA/EPA. There was no control group of patients not taking any supplement.

Addition of lutein + zeaxanthin, DHA + EPA, or both to the AREDS formulation in primary analyses did not further reduce risk of progression to advanced AMD. However, because of potential increased incidence of lung cancer in former smokers, lutein + zeaxanthin could be an appropriate carotenoid substitute in the AREDS formulation.

AREDS 2 had a number of strengths. It was large (4,203 subjects), it lasted 5 years during which 1940 eyes progressed to advanced AMD, the drop-out rate was low, it assessed adherence, and it measured blood levels of the study nutrients.

It also had some weaknesses. They didn’t use a no-supplement control group because they assumed that the original AREDS had proved the benefit of supplementation. This is a bit worrisome, since we know it is risky to rely on a single study. The AREDS trial has not been replicated, and a Cochrane systematic review concluded:

People with AMD may experience delay in progression of the disease with antioxidant vitamin and mineral supplementation. This finding is drawn from one large trial conducted in a relatively well-nourished American population. The generalisability of these findings to other populations is not known. Although generally regarded as safe, vitamin supplements may have harmful effects. A systematic review of the evidence on harms of vitamin supplements is needed.

A personal note

I have macular degeneration that is due to myopia, not AMD. It has not yet caused any loss of visual acuity, but I am being monitored by a retinal specialist. A couple of years ago, she suggested I take a supplement based on AREDS 2, but with the addition of bilberry and omega-3s, available directly from the manufacturer as a mail-order subscription service. She offered it more as an option than a recommendation, and she didn’t push it. I declined, because that particular formulation has not been studied, and because the AREDS evidence is only for AMD, not for myopic macular degeneration. I can understand her wanting to “do something,” and the rationale that it might help and couldn’t hurt, but that wasn’t enough to convince me.

Conclusion

The AREDS trial provided evidence that a mixture of diet supplements slowed the progression of moderate-to-advanced macular degeneration, and the AREDS 2 trial found that a safer formulation was equally effective. The evidence would be more convincing if there were confirmatory studies with no-supplement control groups. There is no evidence of benefit for patients with mild AMD, and no evidence that supplements can prevent AMD. Advertising that products “support eye health” is misleading.

I found it interesting that 50% of the AMD patients were smokers or former smokers, a much higher rate than in the general population. This suggests that smoking cessation might be as important or more important than any supplement (at least in well-nourished people), and it is an intervention that has many other health benefits.


Literally my dream come true! To be glasses free. On the endmyopia journey. Get your eyesight back naturally! What more could one want??

This is legit, I've experienced life-changing improvements within a few months of diligently following the method. It works the same no matter your age or your gender, and the community is absolutely lovely and supportive. Plenty of very educated and empowered people of all age groups, which made the whole thing more than just about improving eyesight! If you are interested in personal growth and willing to read the science and create your own objective narratives, you've come to the right place.


Bates Method Eye Exercises For Myopia

William Bates was a brilliant guy. Back in the 19th century he was way ahead in the optometry game, one of the first to hypothesize that close-up work was a problem and that relaxing the eyes would help prevent increasing myopia.

19th century. We didn’t know much about biology back then, medical practices weren’t exactly brilliant (ice pick lobotomies, anyone?). But Bates was on the right track.

Sadly he p*ssed off the wrong people, and optometry went a different way.

The way of making profit selling you glasses for life, and not having you question what causes myopia at all.

That said, Bates is now used to promote all kinds of eye exercises that either have nothing to do with Bates, or simply don’t do anything if you have multiple diopters of myopia and spend all day in front of screens.

“Bates Method” proponents, basically the flat earthers of myopia control.

I made a video about Bates Method eye exercises, one that pissed off the Bates-Flat-Earthers quite a bit:

Bates Method, fake, also doesn’t address causality.

There are plenty of these exercises that defy logic and anyone with half a brain. Palming is maybe the most popular.


Does reading / close viewing actually affect or harm vision?

I was listening to a podcast featuring a doctor who treated myopia. He asserted that focusing on objects close to the face puts strain on the eye and can damage the muscles that adjust the lens, because, from an evolutionary perspective, our hunter-gatherer ancestors would have been adapted primarily for long-distance vision to see prey or look for danger; short-range vision would only have been used intermittently rather than for hours at a time reading books or some such. It was also asserted that glasses prevent natural focusing of the eye and would, over time, make the eye weaker and inevitably lead to stronger prescriptions due to further atrophy of the eye muscles.

This all seems to make sense to me but I'm not a doctor. Is this largely nonsense or is there any real scientific support for this?

Edit: These are all great responses, thank you!

I couldn't find any research to back up the doctor's claims regarding the pathophysiology of myopia progression from near work, but the actual relationship between near work and myopia is hotly debated. A review article [1] from 2012 summarises the results of key studies, but the results are very inconclusive.

I found a couple of interesting papers, published in well-regarded peer-reviewed journals, that came out after the publication of this review article. A 23-year follow-up study published in Acta Ophthalmologica found that, at least in females, a short reading distance predicted higher myopia in adulthood. [2] In the other study, published in Archives of Ophthalmology, there were no associations between near work and myopia. [3]

There are loads of other studies I can try and find but the gist of it really is that there is no solid evidence that the link actually exists in the first place.

[1] Pan, C., Ramamurthy, D. and Saw, S. (2012), Worldwide prevalence and risk factors for myopia. Ophthalmic and Physiological Optics, 32: 3-16. doi:10.1111/j.1475-1313.2011.00884.x

[2] Pärssinen, O., Kauppinen, M. and Viljanen, A. (2014), The progression of myopia from its onset at age 8–12 to adulthood and the influence of heredity and external factors on myopic progression. A 23-year follow-up study. Acta Ophthalmol, 92: 730-739. doi:10.1111/aos.12387

[3] Lu B, Congdon N, Liu X, et al. Associations Between Near Work, Outdoor Activity, and Myopia Among Adolescent Students in Rural China: The Xichang Pediatric Refractive Error Study Report No. 2. Arch Ophthalmol. 2009;127(6):769–775. doi:10.1001/archophthalmol.2009.105

Attractive though the idea was, it did not hold up. In the early 2000s, when researchers started to look at specific behaviours, such as books read per week or hours spent reading or using a computer, none seemed to be a major contributor to myopia risk. But another factor did. In 2007, Donald Mutti and his colleagues at the Ohio State University College of Optometry in Columbus reported the results of a study that tracked more than 500 eight- and nine-year-olds in California who started out with healthy vision. The team examined how the children spent their days, and “sort of as an afterthought at the time, we asked about sports and outdoorsy stuff”, says Mutti.

It was a good thing they did. After five years, one in five of the children had developed myopia, and the only environmental factor that was strongly associated with risk was time spent outdoors. “We thought it was an odd finding,” recalls Mutti, “but it just kept coming up as we did the analyses.” A year later, Rose and her colleagues arrived at much the same conclusion in Australia. After studying more than 4,000 children at Sydney primary and secondary schools for three years, they found that children who spent less time outside were at greater risk of developing myopia.

But isn't time outdoors generally inversely correlated with time spent looking at things up close, such as computers or books? I mean, what are they doing inside? Just staring out the window all day?

It sounds like nonsense, and I can't find any literature that supports that eye strain can damage your eye muscles permanently.

Of course it's much more difficult to prove something CAN'T happen than to claim it CAN happen, but every medical source will tell you that most of what he's saying is myth.

As far as evolution is concerned, humans and monkeys use their eyes for more than just looking into the distance. The use of tools is one of the more significant differences between humans and most other animals, and most tools are used at arm's length or closer. 1.76 million years is a long time to assume our eyes haven't evolved.

However, the claim that 'glasses make the eye weaker' is 100% bunk, and probably indicates he's trying to sell something. Glasses cannot make your eye weaker, nor can wearing glasses with the wrong prescription ruin your eyes. That's like saying if you blink too hard your eyelids will fall off.

That last sentence was gold.

All of what you said makes sense. I was told by an optometrist in my 20s that I will definitely be needing glasses at some stage, but she was reluctant to give me a prescription until after I hit 30, as glasses can accelerate the deterioration of your eyesight. That was quite a while ago now and I still don't wear glasses, although I really should.

A 2002 paper by Mutti et al. in Investigative Ophthalmology & Visual Science concluded that heredity is the most important factor associated with juvenile myopia, with close viewing or reading making smaller independent contributions. In later life or young adulthood, studies such as Kinge et al. (2001) have shown a significant relationship between reading scientific literature and myopia progression (p ≤ 0.001). Yes, there is correlation, but we do not have conclusive evidence for causation.

The eye has muscles just behind the iris (the coloured part of the eye) called the ciliary body. The ciliary muscles are attached to bungee cords (zonular fibres) which are attached to the human lens. As the ciliary muscle flexes, the zonular fibres tug on the lens, which changes its shape. This change in the shape of the lens is what allows us to focus up close, a process called accommodation. In hyperopes (the opposite of myopes) it also helps them focus in the distance.

As a person ages, their ability to accommodate, or focus at near, reduces. This is separate from myopia; instead it is called presbyopia. The process of presbyopia has not been well elucidated, but there are generally two main theories, lenticular and extra-lenticular. Lenticular theories suggest hardening of the lens; extra-lenticular theories suggest dysfunction of the ciliary muscle and loss of elasticity of the zonular fibres. As a person ages, their ability to change the shape of their lens therefore reduces, and they begin having trouble with near focus. This process becomes manifest at around age 40 and continues until age 60, when there is no more accommodation remaining. Therefore reading prescriptions tend to increase from ages 40 to 60.
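The arithmetic behind this is straightforward: for an emmetropic (or fully corrected) eye, the near point in meters is roughly the reciprocal of the accommodative amplitude in diopters. Here is a rough numerical sketch (the amplitude values are approximate textbook figures added for illustration, not data from this answer):

```python
# How declining accommodative amplitude pushes the near point outward.
# Assumes an emmetropic (or fully corrected) eye, so near point = 1/amplitude.
# Amplitudes are rough, illustrative textbook values by age.

typical_amplitude_d = {10: 14.0, 20: 10.0, 40: 6.0, 50: 2.5, 60: 1.0}

for age, amplitude in typical_amplitude_d.items():
    near_point_cm = 100.0 / amplitude
    print(f"age {age}: ~{amplitude:4.1f} D of accommodation "
          f"-> near point ~{near_point_cm:.0f} cm")

# By age 60 the near point sits out near 1 m, which is why reading
# prescriptions climb between roughly ages 40 and 60.
```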

With presbyopia, adults over age 40 begin to feel that their reading vision is getting worse. Conversely, one may interpret this as reading making their near vision worse. In hyperopia, which uses the same focusing power as accommodation, these individuals may feel that their distance vision becomes worse after reading. While presbyopia is a natural age-related change, anecdotally speaking, the information may be conveyed such that parents interpret it as near work making vision worse, and therefore as near work making their child's vision worse. In fact, presbyopia has nothing to do with children's issues; other accommodative issues affect children, but that is a whole other topic.

I am not certain whether wearing spectacles causes prescriptions to worsen. But even if I were told that spectacles worsen a myopic prescription, I would not stop recommending spectacles. Without spectacles, how can we even expect the individual to see? Your eyes are naturally changing, and prescriptions will change. Not wearing spectacles, or wearing partial prescriptions and seeing the world blurry, just to maybe reduce the rate of change by a small amount, doesn't make logical sense.

I feel that the podcaster may be trying to explain the effects of presbyopia in the wrong context (childhood) to make parents empathise with their child's changing vision. This is not correct: the science is wrong, and it preys on parental emotions to sell a non-evidence-based treatment that will not work. We can slow down myopia changes, but we cannot reverse or cure myopia.


Do You Need Glasses?

You squint your eyes, lean forward a bit, and close one eye. You open that eye, close the other, squint again, and pull your head back. No matter what you seem to do, the sign still looks blurry. Did someone make a blurry sign? Or might you need glasses? Lots of people have problems with vision that can be corrected with glasses or contact lenses. To get these, you should visit the eye doctor.

A typical eye chart.

There are two types of vision experts that you can see to have your eyes checked. An optometrist has special training to examine eyes and can give prescriptions for glasses and contact lenses. An ophthalmologist is a doctor who, in addition to checking your eyes and giving prescriptions for glasses and contact lenses, can also perform surgeries to fix eyes that have been injured.

Some people go to an eye doctor each year to check that their eyes are healthy. During an exam, the eye doctor will usually ask a patient if they have any trouble seeing things up close, far away, and when it is dark. The doctor might also take a picture of the inside of a patient’s eye by having them look into some fancy binoculars. This picture is called a retinal scan and lets the eye doctor see the retina and check that the blood vessels and tissue look healthy. During the next part of the exam the patient will be asked to look at an eye chart and to cover one of their eyes to see how well they can read the letters with the uncovered eye. How does this let them know that the patient needs glasses?


What’s Biology Got to Do with It

Are biological and sociological accounts of human social behavior inevitably opposed?

Just over a decade ago, a group of self-described “biologically minded sociologists” established an American Sociological Association-sponsored section titled Evolution, Biology, and Society. Their two-fold claim (echoed for many years among anthropologists as well) is that 1) biology is emerging as the dominant science of the twenty-first century and 2) “biophobia” in the social sciences is getting in the way of more fully integrated theories and research in human social behavior.

Critics of this position don’t necessarily disagree with either claim; instead, they point to the disproportionately high research investment in the biological and physical sciences relative to the social sciences. They charge that this funding imbalance is an indicator of a “biomania” that threatens to reduce complex social behavior to natural tendencies. The resulting individualistic models are then used as supporting evidence for increasingly austere neoliberal policies, not to mention homophobia, racism, sexism, and xenophobia.

Both perspectives have some traction. Recent advances in biological inquiries into human behavior—which include fields such as evolutionary biology and psychology, behavioral genetics, and bio-indicators (hormonal and neuroscience measures)—are significant. It is also true that corporations, politicians, and activists of all ilks strategically manipulate biological science to support their interests.

Paradoxically, there are countless examples that the logic of social construction has proliferated into everyday parlance; at the same time, the luster and authority of biological science is a recurrent (and highly marketable) theme in popular culture. All of this signals new twists in the familiar nature versus nurture debate and leads us to ask: is a new bio-socio zeitgeist emerging?

For this Viewpoints, we asked five experts to comment on the biological turn in sociological research. Dalton Conley describes his path to appreciating the potential of integrating genomics into studies of social mobility. In stark contrast, Roger N. Lancaster lambasts the logic of genetic reductionism for explaining cultural institutions.

Alondra Nelson describes the case of the African Burial Ground project in which researchers at Howard University have creatively used genetic comparison to confer a humanizing social life (i.e., to infer ancestral associations and ethnic affiliations) on former slaves who were buried in New York City. This research not only departs from the either/or nature versus nurture debate, it is a notable corrective to the racist classification-based genetics that were typical in the nineteenth and twentieth centuries.

In another twist on the debate, Kristen Springer advocates for the integration of sociological reasoning and biological evidence to explain persistent patterns of gender differentiation. Karl Bryant continues this theme in his discussion of contemporary sociology students who have been schooled to dismiss biology in favor of a fuzzy form of social construction. Both Springer and Bryant contend that sociologists need to be more biologically literate, if only to effectively respond to the misattribution of biological determinants to social behavior. Furthermore, using biologist Anne Fausto-Sterling’s analogy to the helix, they note that biological evidence can be usefully integrated into sociological models for a fuller, more grounded explanation of human behavioral patterns.

  1. How I Became a Sociogenomicist, by Dalton Conley
  2. Cultural Institutions Do Not Reduce To Genes, by Roger N. Lancaster
  3. Genetic Ancestry Testing as an Ethnic Option, by Alondra Nelson
  4. How Biology Supports Gender as a Social Construction, by Kristen Springer
  5. Teaching the Nature-Nurture Debate, by Karl Bryant

How I Became a Sociogenomicist

In 1997, I had recently completed my doctoral dissertation on the impact of family wealth on socioeconomic attainment and racial inequality in post-civil rights America. As I was turning my thesis into a book (Being Black, Living in the Red), I came across What Money Can’t Buy: Family Income and Children’s Life Chances by sociologist Susan Mayer of the University of Chicago Harris Graduate School of Public Policy Studies. Her book challenged my assumptions and forever altered my research trajectory.

In this clever volume, Mayer deployed a number of counterfactuals and natural experiments to show that the traditional estimates of the effect of income on children’s life chances have been grossly overstated. For example, she showed that a dollar from a transfer payment had little to no effect on children while a dollar from earnings had a much bigger effect—suggesting that it was the underlying attributes of the parents that led them to earn money that were having the positive effect, not the dollars per se. She also showed that additional income did not usually result in the purchase of goods or services that we would expect to improve the human capital or life chances of children. While there were certainly limitations to her work and some questionable assumptions in her models, she upended the world of poverty research as far as I was concerned.

While I went on to publish my book with the appropriate warnings against interpretation of my parental wealth “effects” as causal, the Mayer work sent me off in search of a correctly specified way to assess the impact of parental resources and family conditions on children’s outcomes. This journey led me first to econometrics and labor economics, which I viewed as well ahead of sociology in confronting the issue of endogeneity and selection bias.

Though I found difference-in-differences, instrumental variable, and regression discontinuity approaches helpful in generating more consistent estimates, such approaches all suffered from the limitation that the researcher had to take what she could get in terms of natural experiments. There is—as far as I know—no good instrumental variable for parental wealth, for example. There is no regression discontinuity for race. Even if we considered randomized controlled trials, there remained severe limits to the sort of factors that were adjustable and therefore able to be studied in a causal, counterfactual framework. To quote Penn sociologist Herb Smith, “Nobody denies that the moon causes the tides even though we can’t perform an experiment on it.”

Genetic Endowment: The Lurking Variable

This frustration, in turn, led me to study genetics. The recent addition of genetic markers (single nucleotide polymorphisms or SNPs) to large datasets such as the Health and Retirement Study, the National Longitudinal Study of Adolescent Health, and the Wisconsin Longitudinal Study has opened up a new frontier for the social sciences. (Similar efforts are also underway in Europe, for example with the Biobank Project in the United Kingdom and large-scale genotyping of subjects at several European twin registries.) We now enjoy the possibility of directly confronting, measuring, and controlling for one of the two main “lurking” variables that bias traditional models of socioeconomic attainment. That lurking variable is, of course, genetic endowment. (The other is the influence of cultural practices that are also transmitted across generations.)

Whether sociologists care one whit about how genetic endowments at conception matter for life chances, how they interact with social environment, and whether they are a random lottery at birth (or rather are socially structured by “tribe”), sociologists should at least want to include genotype in their models in order to better specify the social variables about which we do care. That is, by constructing and including genetic risk scores for outcomes, we can obtain correctly specified, unbiased parameter estimates for the variables (such as education, etc.) that typically interest social scientists.

Furthermore, we can then interact genetic propensity with exogenous environmental variables to go from the adage “a gene for aggression lands you in the board room if you are to the manor born but in prison if you’re from the ghetto” to a robust research agenda on GxE effects. Genotype, in fact, may be the prism by which we come to understand why some individuals react so differently to the same social stimulus—be that a poor neighborhood, a family breakup, or a learning intervention. Heterogeneous treatment effects abound in the social science literature; perhaps innate disposition can provide a rational accounting of them.
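As a concrete sketch of the two moves described above, the snippet below simulates a polygenic score that confounds an education effect, then recovers both the debiased main effect and the gene-by-environment interaction. Everything here is synthetic and the variable names are illustrative; it is not drawn from any dataset or study named in this piece.

```python
# Sketch: polygenic score as a control variable, plus a GxE interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
prs = rng.normal(size=n)                 # polygenic score, fixed at conception
edu = 0.4 * prs + rng.normal(size=n)     # schooling partly tracks genotype
outcome = 0.5 * edu + 0.3 * prs + 0.2 * edu * prs + rng.normal(size=n)
df = pd.DataFrame({"prs": prs, "edu": edu, "outcome": outcome})

naive = smf.ols("outcome ~ edu", data=df).fit()
adjusted = smf.ols("outcome ~ edu + prs + edu:prs", data=df).fit()

print(naive.params["edu"])         # biased upward by the omitted genotype
print(adjusted.params["edu"])      # close to the true 0.5
print(adjusted.params["edu:prs"])  # the gene-by-environment term
```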

In my view, this is potentially one of the next frontiers in stratification research: integrating the big data of genomics with established social scientific models of mobility—using econometric techniques to ensure both genotype and environment are indeed exogenous. Rather than police disciplinary boundaries—debating whether it is the left-hand side variable or right-hand side measure (or both) that needs to be purely social to make research sociology—we should embrace any data and any causal pathways that explain the variance in outcomes about which we care. (Interestingly, sociologists seem much more comfortable with epigenetic models that seek to explain how the social environment affects gene expression; it seems the biological is okay when it is the explicandum, not the explicator.)

So far, in the emerging subfield of sociogenomics, the worst fears of a deterministic science of human behavior—à la The Bell Curve—have not come to pass. Scientists who have engaged with molecular genetic data have been responsible in their modeling approach and cautious in their claims. Let us hope that these norms take root and continue to guide our work.

Cultural Institutions Do Not Reduce To Genes

The New York Times recently ran an opinion piece that I would have taken as an Onion parody had I not read it start to finish. Its title queries, “How Much Do Our Genes Influence Our Political Beliefs?” In it, Thomas Edsall airs research by three psychologists who found that identical (monozygotic) twins reared in different homes are more likely than fraternal (dizygotic) twins to share a tendency to obey traditional authorities. “Authoritarianism, religiousness and conservatism,” which the authors of the study call the “traditional moral values triad,” are “substantially influenced by genetic factors,” they conclude.

Edsall suggests that genetics, natural selection, and evolutionary psychology might answer a question that has plagued American politics since 1968: “Why do so many poor, working-class and lower-middle-class whites—many of them dependent for survival on government programs—vote for Republicans?” He helpfully points to the voting behavior of West Virginians in the last presidential election: the state’s median family income is 48th in the nation; nearly 1 in 5 of the state’s residents receives food stamps; nearly 1 in 4 will be on Medicaid as the Affordable Care Act takes effect; and 16.4 percent are on disability (the largest percentage in the nation). Yet voters overwhelmingly rejected Obama for Romney.

“Why are we afraid of genetic research?” Edsall muses in conclusion. “To reject or demonize it, especially when exceptional advances in related fields are occurring at an accelerating rate, is to resort to a know-nothing defense.”

The Ill-Logic of Genetic Reductionism

Crowd-pleasing tales about human nature have found a permanent home in psychology, and such stories buttress too much pontification in what passes for a serious public sphere. Permit me to belabor the obvious ill-logic of this reasoning by working from a specific case to general principle.

First, West Virginia has not always been a conservative bastion. The state voted Republican only three times between 1932 and 1996—each time in a general Republican landslide: Eisenhower (1956), Nixon (1972), and Reagan (1984). Only Minnesota posts a similarly Democratic run over the same period. Moreover, West Virginia turned Republican much more slowly than other southern and border states—despite being much whiter (94 percent of the state is non-Hispanic white). No doubt a history of coalminer labor struggles embedded class consciousness in voters’ political reflexes for a long time.

What has happened post-1960s is that the center of liberalism’s gravity has shifted: away from bread-and-butter economic issues to forms of cultural liberalism that emphasize tolerance, diversity, and personal freedoms. Undoubtedly, the preponderantly Scotch-Irish voters of West Virginia love guns and God. Edsall understands this political shift and its implications quite well—far better than most political analysts. But the “Genes” column undermines any insights into the unraveling of the New Deal coalition by turning historical products like clannishness, loyalty, and respect for authority into the fixed biological traits of particular regional or ethnic groups.

Edsall’s reasoning does not withstand even the simplest of logical tests. Note that the descendants of Ulster Scots also are heavily settled in North Carolina and Maine. Obama polled poorly in the rural Piedmont but still carried the Tar Heel State once. He carried Maine twice—and Maine is even whiter than West Virginia. Note, too, that Lowland Scotland and Northern England (original homelands of the Ulster Scots) are Labour strongholds, whereas in Northern Ireland, Protestant descendants of the Scots vote Unionist. In short, political consistency is lacking among genetic descendants across different regions. (And even if I were to give Edsall the benefit of the doubt, I would have thought that individualism, insularity, populism, and rebelliousness were more likely traits of Scotch-Irish culture in North America than obedience to authority.)

Furthermore, we have seen no “exceptional advances” in fields that attempt to explain social facts by recourse to biological causes. We have seen instead the proliferation and institutionalization of just-so stories—accounts that sound scientific because they’ve got genes and numbers and even sometimes correlations—but that fail the standards of good scholarship and scientific inquiry. Twin studies have played a notorious part in this swindle. Identical twins reared separately have been found to drive the same car models, to smoke the same cigarette brands, and to masturbate over photos of construction workers. Only a rube would believe that there could be “a” “gene” that controls such behaviors. Beware, too, of single studies with small samples and results that lie close to the margin of error: these had established for many the probable existence of “a” “gay” “gene” in the 1990s. Those studies have proved notoriously difficult to replicate—never mind that they reduce a complex and historically constructed institution like homosexuality to a thingified chain of nucleotides.

Anthropologist Marshall Sahlins once quipped that a “theory ought to be judged as much by the ignorance it demands as by the knowledge it purports to afford.” Now, as then, the reduction of social facts to biological facts demands that we forget long histories of migrations, class struggles, ethnic resentments, and institutional changes—that is to say, almost everything that might make social facts intelligible.

Genetic Ancestry Testing as an Ethnic Option

Combating color-blind racism requires the restoration of color-vision—the return of visibility of inequality. In this “post-racial,” post-genomic moment, DNA offers the unique and paradoxical possibility of magnifying issues of inequality quite literally at a microscopic level.

Diasporic Blacks in the United States are engaged in a constellation of activities in which information about their ancestral origins—inferred with the aid of genetic analysis—is deployed to bring racism’s past and present into view. DNA is a contradictory lexicon for a political moment at which the moral language of social justice increasingly falls on unsympathetic ears.

In 1991 archeologists uncovered several graves on a plot in lower Manhattan. The unearthed burials turned out to be the “Negro Burying Ground,” a former municipal cemetery for the city’s enslaved African population. The rediscovery of this Colonial-era burial ground, with its promise of rare insight into the life and death of enslaved men and women in New York, was an occurrence of historical import.

Following exhumation, the contents of the gravesites were brought to the Lehman College (NY) laboratory for investigation, where the method of analysis consisted primarily of osteology—the scientific measurement of the skeletal remains—and the broad classification of them into several categories, including stature, age, sex, and race. This forensic approach was and remains standard practice among some physical anthropologists and was the perspective that the Lehman researchers brought to the project.

Other researchers, however, including those at Howard University, a historically Black university, found this strictly forensic mode of analysis and interpretation inadequate to the historical significance of the cemetery. Detractors of the Lehman approach, including influential African American physical anthropologist Michael Blakey, who would soon be appointed the project’s new director, contended that the Lehman approach was unduly preoccupied with gross racial classification of the sort also used for criminal justice purposes. He further maintained that this methodology reduced the individuals in the burials to “narrow typologies” and thinly “descriptive variables,” and thereby “disassociated” them from their “particular culture or history.”

Local activists felt similarly. A group who referred to themselves as the Descendants of the African Burial Ground, for example, expressed their opposition to any forensic analysis of the remains that would yield classification of them solely by “skin color”; the activists argued that such an interpretation amounted to the “biological racing” of their ancestors’ remains. Together, the community and the Howard-based researchers pushed for interpretive approaches that would generate more than gross racial sorting of the remains.

Scholar Stephen Jay Gould and others have documented the comparative scientific “mismeasurement” of bodies, from lung capacity to crania to genes—with white bodies serving as the norm against which all others are measured—that has long been employed to advance deliberate and erroneous claims about Black inferiority. Against the backdrop of this bitter legacy of biological discrimination, supporters of the Howard researchers’ analytic method of interpretation sought to upend this history by using biometrics alongside other forms of both scientific and humanistic analysis in order to glean new information about the embodied experience of slavery as well as the particular African origins of some of the earliest Black Americans.

The eventual siting of the African Burial Ground research project at Howard, in response to pressure from the local community, marked not only a new temporary home for the remains while they were being analyzed but, moreover, a fundamental change in the framing of how and why the research was conducted. If the question undergirding the investigations of the Lehman lab could be summarized as “are these the bones of Blacks?” the Howard researchers, to the contrary, sought answers to a more extensive set of questions, including “what are the origins of the population, what was their physical quality of life, and what can the site reveal about the biological and cultural transition from African to African-American identities?”

Blakey’s team hoped to use the rediscovery of these rare remnants of Black colonial life as an opportunity to more fully detail knowledge about how those buried at the African cemetery in lower Manhattan lived and died. At the Howard lab, in other words, the research orientation was shifted from an epistemology of racial classification to an epistemology of ethnicity (and therefore, also ancestry).

Restoring Ethnic Options

According to historian Michael Gomez, the Africans brought to the Americas as slaves during the Middle Passage underwent a “transition, from a socially stratified, ethnically-based identity directly tied to a specific land, to an identity predicated on the concept of race.” In the process, they “exchanged their country marks”—their myriad ethnicities—for a generic, subordinate and collective racial category. Race would in subsequent years become a “master status” for African Americans, a caste location.

Underscoring Gomez’s point from a later historical vantage, sociologist Mary Waters notes that “Black Americans… are highly constrained to identify as blacks, without other options available to them, even when they believe or know that their forebears included many non-blacks.” Thus the opening up of long-denied “ethnic options” to African Americans promised by genetic inference was a potential social watershed.

The African Burial Ground project was among the earliest and most public uses of genetic comparison to infer ancestral associations and ethnic affiliations in the United States. This research, and African Ancestry, the commercial venture it launched, established the groundwork for the social life of DNA. Research on the recovered cemetery became a paradigm for how genetics could be used to constitute identity and reconstruct the past, and for the circulation of genetic claims beyond the laboratory and the court of law.

The controversy that transpired over excavation methods and research priorities at the centuries-old African Burial Ground reveals how genetics—despite its vexed trajectory in Black life—can become the building blocks for reconciliation projects that resuscitate public memory of chattel slavery, that shed light on its devastating effects into the present, and that may portend the future of American racial politics.

How Biology Supports Gender as a Social Construction

One of the first things asked of any new parent is whether the baby is a boy or a girl. “Gender-reveal” parties are an increasing fad involving events such as slicing a cake to find pink or blue underneath the frosting. Biological sex is now routinely determined in advance of birth either through chromosome tests or ultrasound. However, it isn’t the biology per se that people are interested in—as evidenced by the name “gender” reveal party not “sex” reveal party. Rather, biological identification (sex) signals how the new person will likely be treated and expected to act, and even what its life chances are.

Sociologists, as well as anthropologists and gender scholars, have long bristled at the conflation of sex (biology) and gender assignment (culture), especially to the extent that the latter continues to evoke an array of cultural myths about gender differences that are believed to be rooted in biology—such as men and women having different kinds of intelligence. A high profile example of this thinking is from the now decade-old statement by former Harvard president Lawrence Summers suggesting that fewer women are in engineering in large part because of “intrinsic” (read biological) aptitude differences in math.

Sociologists have worked diligently (and effectively) to demonstrate that biology is not destiny. However, this does not mean that biology is unimportant for sociological understandings of gender. Indeed, based on my reading and research, I think there are many fruitful ways that biological research can illuminate our understanding of social patterns around gender, but not in the causal way presumed in much research as well as in what passes for “common sense.” The challenge is not to throw out biology altogether, but to integrate it more scientifically into socio-cultural studies of patterned inequalities, such as gender.

More Sameness Than Difference

One fruitful area in this direction is the study of sameness between men and women, rather than difference. The vast majority of biological research on sex and gender continues to focus on differences and to valorize differences as “findings” while dismissing similarities as non-findings. This form of scientific bias is itself a reflection of cultural myths about gender (we expect to see differences and we expect them to be biologically based). Many of these supposed differences do not actually meet scientific criteria for difference.

For example, in 2007, Harvard medical scholar Nikolaos Patsopoulos and his colleagues published an article in the Journal of the American Medical Association based on their review and analysis of leading published research claiming different genetic effects across sexes. The authors determined that the vast majority of the 432 sex-difference claims were not actually substantiated; only one (yes, one!) had strong internal validity and was sufficiently replicated. The focus on difference (and the publication of insufficiently substantiated reports) obscures the tremendous similarity between men and women. Paying attention to the biological sameness of men and women rather than the difference illuminates just how much social and structural factors (not biological factors) create the idea of men and women as different.

Another way to incorporate biology into social scientific gender research is to examine the ways in which socio-cultural practices shape bodies, reversing the assumptions of biological essentialism that posit inherent differences between men and women. In other words, used well, biological research can actually support the logic of socially constructed and sustained patterns of difference and inequality.

Biologist and gender scholar Anne Fausto-Sterling is a pioneer in demonstrating the ways in which nature and nurture intertwine, like a helix, to shape what we come to see, socially, as two distinct genders (she posits at least five biologically based sexes).

According to Fausto-Sterling, social practices “shoehorn” bodies into culturally recognizable gender types. Her research on gendering bones is a beautiful illustration of this. Two concrete examples of how bones show gender are the social imperative for women to wear high heels and the historical practice of binding young women’s feet to prevent future foot growth. Both of these practices result in significant differences between men’s and women’s feet bones.

Another way that biology can help us understand how gender is experienced is through analyzing physiological markers as indicators of stress. For example, cortisol is a steroidal hormone that is colloquially referred to as a “stress hormone” because it is released in response to stress—and is particularly susceptible to social evaluation and social threats. Other cardiac and vascular measures can also be used to identify and assess the magnitude of gendered threats and stress.

For example, in work I am currently conducting, we explore how threatening someone’s masculinity leads to changes in cortisol and cardiac responses. Using physiological responses to understand these gendered social threats allows us to assess the salience of masculinity ideals even if respondents want to enact masculinity by self-reporting that they do not feel their masculinity is threatened.

As the biological research enterprise continues to expand, it is increasingly important that social scientists interested in gender are intimately and integrally involved in this research. Applying a socially informed perspective to biological projects can not only avoid the potential danger of biological essentialism but can actually illuminate gender as a socio-cultural patterned phenomenon.

Teaching the Nature-Nurture Debate

Like many professors, on the first day of the semester I ask students why they’re taking my courses: what do they think the course is about, and why are they interested in it? The responses vary, but frequently students reference some version of “social construction.” Their answers to my follow-up question, “what does social construction mean?”, reveal a lot about how students understand the relationship between sociology and biology.

When I first started teaching just over a decade ago, I braced myself, figuring that students would be inclined to use biology to explain most social phenomena. My concern was heightened because my main research and teaching areas are sex and gender, areas of social life that have been particularly vulnerable to biological essentialism. Given the current cultural and scientific moment where we do things like map the human genome and look for psychopharmacological fixes for all sorts of problems of daily life, it comes as no surprise that we are constantly encouraged to see the world through this lens.

While I wasn’t wrong in my assumption that many students see biology as the “real” foundation for causal explanations of social behavior and institutions, I didn’t expect the many students who claim sociology as the righteous combatant poised to save us from pervasive forms of biological reductionism. Encountering students who seemed not only familiar with, but who embrace a social constructionist perspective was a pleasant surprise (my teaching mentors describe the dark ages in decades past of laboring for weeks on end just to drive home the idea of cultural relativity).

However, upon closer engagement, I’ve realized that the social constructionist perspective some of them espouse is more of a rote response than a thoughtfully articulated corrective to biological myopia; their response to biological reductionism is sometimes a reductionist—or at least misguided—view of sociology, where definitions of “social construction” simply mean “not biology.”

A False Debate

Both scholarly and popular discourses often pit biology against sociology in the form of the “nature vs. nurture” debate, so it should be no surprise that students adopt this frame and take up either/or sides. But this is of course a false dichotomy and, therefore, a wrong-headed debate. I don’t want students (and others) to think they have to choose between sociology and biology; I want them to understand that the sources of phenomena we are interested in can be fully biological and fully sociological; that there is a deep and crucial sociological analysis that can be applied to the current mania for biological understandings of the world; and that causes are also about socially constructed and prescribed meanings, including the meanings we attach to biological “facts.”

Instead of wondering and debating what role biology can have in sociology, we need to become more engaged with research from the sociology of science, the sociology of knowledge, and the sociology of the body/embodiment, to name just a few simpatico areas. These fields of study bring a sociological analysis to biologically-dominated frameworks, fruitfully demonstrating the culturally laden discourses that underlie unquestioned assumptions in biological research and providing compelling examples of the interplay of bio-physiological and social factors in shaping human bodies and experiences. For example, in current debates about the pathology of homosexuality and gender nonconformity, so-called sympathizers often use discourses of science and biological imperative to normalize these behaviors, as if reducing them to natural tendencies somehow makes them acceptable.

In my research on the psychiatric diagnosis Gender Identity Disorder of Childhood (GIDC) I examine these “naturalizing = normalizing” tendencies. When exploring gender nonconformity with my students, I want them to be able to apply a sociological analysis that includes critical questioning of culturally informed scientific discourses. But I also want them to understand that we do have bodies and physical experiences, and what we need is a sociologically informed approach to biology that enables us to examine these experiences as an integration of body and culture.

By beginning to more centrally frame the discipline in terms of its conversation with biology, especially concerning the sociological analysis of biology that we have a long tradition of providing, we can more deeply and complexly engage students and others in ways relevant to dominant, but narrow, biological understandings of the contemporary social landscape.

Authors

Roger N. Lancaster is in anthropology and cultural studies at George Mason University. He is author of Sex Panic and the Punitive State and The Trouble with Nature.

Alondra Nelson teaches sociology and gender studies at Columbia University. She is coeditor of Genetics and the Unsettled Past: The Collision of DNA, Race, and History. This piece is adapted from her forthcoming book, The Social Life of DNA.

Kristen Springer is in the sociology department at Rutgers University. She studies the health effects of gendered norms and family dynamics among older adults.

Karl Bryant is in sociology and women’s, gender, and sexuality studies at SUNY New Paltz. He is currently studying the emergence of “transgender childhood” as an increasingly salient social subjectivity across multiple domains.


New genetic risk factors for myopia discovered

Myopia, also known as short-sightedness or near-sightedness, is the most common disorder affecting the eyesight, and it is on the increase. The causes are both genetic and environmental. The Consortium for Refractive Error and Myopia (CREAM) has now made important progress towards understanding the mechanisms behind the development of the condition. This international group of researchers includes scientists involved in the Gutenberg Health Study of the University Medical Center of Johannes Gutenberg University Mainz (JGU). The team has uncovered nine new genetic risk factors which interact with education-related behavior, the most important environmental factor, to generate the disorder. The results of the study "Genome-wide joint meta-analyses of genetic main effects and interaction with education level identify additional loci for refractive error: The CREAM Consortium" have recently been published in the scientific journal Nature Communications.

There has been a massive rise in the prevalence of short-sightedness across the globe in recent decades and this upwards trend is continuing. It is known from previous studies of twins and families that the risk of acquiring short-sightedness is determined to a large extent by heredity. However, the myopia-causing genes that had been previously identified do not alone sufficiently explain the extent to which the condition is inherited. In addition to the genetic causes of myopia there are also environmental factors, the most significant of which are education-related behavior patterns. "We know from the Gutenberg Health Study conducted at Mainz that the number of years of education increases the risk of developing myopia," said Professor Norbert Pfeiffer, Director of the Department of Ophthalmology at the Mainz University Medical Center.

Meta-analysis of multi-national datasets

With the aim of identifying genetic mutations relating to myopia and acquiring better insight into the development of the condition, the international research group CREAM carried out a meta-analysis of data collected from around the world. The data compiled for this analysis originated from more than 50,000 participants who were analyzed in 34 studies. The second largest group of participants was formed by the more than 4,500 subjects of the Gutenberg Health Study of the Mainz University Medical Center. "In the field of genetic research, international cooperation is of particular importance. This is also borne out by this study, to which we were able to make a valuable contribution in the form of data from our Gutenberg Health Study," continued Professor Norbert Pfeiffer. "And in view of the fact that a survey undertaken by the European Eye Epidemiology Consortium with the help of the Gutenberg Health Study shows that about one third of the adult population of Europe is short-sighted, it is essential that we learn more about its causes in order to come up with possible approaches for future treatments."

Aware that environmental effects and hereditary factors reinforce one another in the development of myopia, the scientists devised a novel research concept for their investigations. They used a statistical analysis technique that takes both environmental and hereditary effects into account simultaneously and in equal measure. Their efforts were successful: they were able to identify nine previously unknown genetic risk factors.
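A minimal sketch of what such a joint test looks like for a single genetic variant, assuming a simple linear model and synthetic data (the actual CREAM analysis is a genome-wide meta-analysis across many cohorts and differs in detail):

```python
# Joint 2-degree-of-freedom test of a SNP's main effect plus its
# interaction with education, per the design named in the paper's title.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "snp": rng.binomial(2, 0.3, size=n),  # minor-allele count: 0, 1, or 2
    "edu": rng.normal(size=n),            # standardized education level
})
# More negative refraction = more myopic; effect sizes here are made up.
df["refraction"] = (-0.10 * df["snp"] - 0.30 * df["edu"]
                    - 0.05 * df["snp"] * df["edu"] + rng.normal(size=n))

full = smf.ols("refraction ~ snp + edu + snp:edu", data=df).fit()
reduced = smf.ols("refraction ~ edu", data=df).fit()

lr = 2 * (full.llf - reduced.llf)         # likelihood-ratio statistic
print("2-df joint test p-value:", chi2.sf(lr, df=2))
```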

Risk-associated gene involved in the development of short-sightedness

These newly discovered genetic variants are associated with proteins which perform important functions when it comes to the transmission of signals in the eye. One of these genes is of particular interest because it plays a major role in the transmission of the neurotransmitter gamma-aminobutyric acid (GABA) in the eye. Previous studies have shown that there is greater activation of the gene in question in eyes that are myopic. The results of current research substantiate this conclusion. The CREAM researchers interpret this as evidence that this newly discovered risk-related gene is actually involved in the development of short-sightedness. This represents significant initial headway towards understanding how genetic causes interact with the level of education as an environmental factor to produce the heterogeneity of myopia. Further research will be needed to clarify the details of how the mechanisms actually work and interact with one another.

The spread of short-sightedness is a worldwide phenomenon. Particularly in South East Asia the incidence of myopia in school children has increased notably over the last decades. This is likely due to an improvement in educational attainment. People who read a great deal also perform a lot of close-up work, usually in poor levels of daylight. The eye adjusts to these visual habits and the eyeball becomes more elongated than normal as a result. But if it becomes too elongated, the cornea and lens focus the image just in front of the retina instead of on it so that distant objects appear blurry. The individual in question is then short-sighted.


Getting Stronger

Here is the video and slide set from my presentation at the Ancestral Health Symposium, August 9, 2014, in Berkeley, California. I enjoyed meeting many of you who were at the conference. I’d recommend watching the video first, and perhaps follow along with the uploaded slide set in a separate window, since it is more convenient for viewing references and other details.

(Note: You’ll notice some minor differences between the video and slide versions, as the AV team inadvertently projected an earlier draft rather than the final slides I had provided.)


Overview of the talk. For ease of reference, here is a slide-by-slide “table of contents” summary of the presentation. People are always asking me to provide a detailed explanation of exactly what steps to take to improve their vision. You’ll find this bottom-line “practical advice” in Slides 23-36.