Why are obese mice so easy to chase?

Dysfunctional signaling in the brain makes obese mice less active

Summary

Obesity is accompanied by a lack of motivation or desire to exercise. This has led to the idea that lack of exercise leads to obesity. A new study challenges this by showing that both “lazy” and “active” mice gain weight on a fatty diet [1]. All mice on a high-fat diet became obese and then moved around less than mice fed standard chow. The researchers go on to show that the lack of motivation to exercise that accompanies obesity may well be brought about by neuronal changes in the regions of the mouse brain that respond to movement.

Introduction

The physical inactivity that accompanies obesity is frustrating for those wanting to manage their own weight as well as for those who want to support a near and dear one who does [2,3]. A better understanding of where this seeming ‘lack of motivation’ to exercise comes from may help design better intervention strategies. Previous studies suggest that obese animals and humans may have defects in dopamine signaling in a region of the brain that controls movement behaviours, with the result that they may find physical activity less rewarding [4,5]. However, does lack of exercise cause weight gain?

What did they do and find?

Mice were fed standard chow (lean) or a high-fat diet (obese) for 18 weeks. Mice became both obese and less active when fed the high-fat diet. The researchers of this study wanted to understand if the mice became fat because they were less active. To their surprise, they found that low activity and weight gain occurred hand in hand but were not cause and effect. The weight gain, however, was correlated with the high-fat diet.

So what was causing the lower activity in obese mice? There is a part of the brain called the striatum, which is responsible for movement and is disrupted in disorders such as Parkinson’s disease. Neurons in this region of the brain are sensitive to the neurotransmitter dopamine and fire (get activated) during movement. The authors of this study reasoned that perhaps it is this region of the brain that is responsible for inactivity in obese mice.

First they looked at the components of dopamine signaling: the levels of dopamine itself and the dopamine receptors which, when present on neurons, allow them to respond to dopamine. They found that in the striatum of obese mice, a specific kind of dopamine receptor (the D2R receptor) showed decreased binding, while the levels of dopamine itself and of the other dopamine receptor were the same in lean and obese mice. This reduction in D2R binding did not correlate with weight gain but did correlate with loss of movement.

So would lean mice also move less if their D2Rs showed lower binding?

Indeed, in genetically modified mice that lacked D2Rs in the striatal region, lower activity levels were observed even when the mice were lean. This showed that neuronal changes can underlie the lower activity levels seen in obese mice.

To probe this further, the researchers measured the activity of neurons in the striatum by inserting electrodes into the brains of live obese and lean mice. These recordings showed that during movement there was less overall firing in the brains of obese mice.

In order to test if these brain regions and neurons were indeed responsible for the lower activity observed in obese mice, the researchers used a special set of mice. These mice are engineered so that only the striatal neurons that naturally express D2R produce an opioid receptor coupled to Gi, the signaling molecule normally produced by active dopamine signaling via D2R binding. This allows Gi to be switched on selectively using a synthetic chemical (Salvinorin B). When Gi is artificially activated in the D2R-expressing neurons of the striatum, both lean and obese mice become more active.

Artificially reducing D2R levels in the neurons of the striatum resulted in mice with lower activity levels; however, these mice were not more susceptible to weight gain. Nor were mice with low D2R binding at the beginning of the diet predisposed to weight gain.

Take homes from the study

Experiments on animal behaviour are difficult, and their results are sometimes hard to extend beyond specific cases because genetic and environmental effects play a large role in shaping observed behaviour; this study is no different. Still, these data convincingly argue that, in mice, obesity is accompanied by, and not caused by, lack of activity. It also gives us a perspective on how integrated an animal’s body and mind are. At the very least, it makes us think that in combating obesity, a role for the mind cannot be ignored.

References

1. Basal Ganglia Dysfunction Contributes to Physical Inactivity in Obesity. Danielle M. Friend, Kavya Devarakonda, Timothy J. O’Neal, Miguel Skirzewski, Ioannis Papazoglou, Alanna R. Kaplan, Jeih-San Liow, Juen Guo, Sushil G. Rane, Marcelo Rubinstein, Veronica A. Alvarez, Kevin D. Hall, Alexxai V. Kravitz, Cell Metab. 2017 Feb 7

2. The mysterious case of the public health guideline that is (almost) entirely ignored: call for a research agenda on the causes of the extreme avoidance of physical activity in obesity. Ekkekakis P, Vazou S, Bixby WR, Georgiadis E, Obes Rev. 2016 Apr;17(4):313-29

3. Exercise does not feel the same when you are overweight: the impact of self-selected and imposed intensity on affect and exertion, P Ekkekakis and E Lind, International Journal of Obesity (2006) 30, 652–660

4. Reward mechanisms in obesity: new insights and future directions. Kenny PJ. Neuron. 2011 Feb 24;69(4):664-79.

5. Obesity and addiction: neurobiological overlaps (Is food addictive). Volkow ND, Wang GJ, Tomasi D, Baler RD. Obes Rev. 2013 Jan;14(1):2-18.

6. Do Dopaminergic Impairments Underlie Physical Inactivity in People with Obesity? Kravitz AV, O’Neal TJ, Friend DM, Front Hum Neurosci. 2016 Oct 14;10:514. eCollection 2016.

7. Increases in Physical Activity Result in Diminishing Increments in Daily Energy Expenditure in Mice. Timothy J. O’Neal, Danielle M. Friend, Juen Guo, Kevin D. Hall, Alexxai V. Kravitz. Curr Biol. 2017 Feb 6;27(3):423-430.

An Interview with Dr. Alexxai V. Kravitz

1. What is causing the change in dopamine signaling in the neurons responsive to movement in obese mice? Do you have more insights into this from your study of Parkinson’s?

This is a great question, but unfortunately one that we don’t know the answer to. Parkinson’s disease is caused by the death of neurons that make dopamine, and we looked at dopamine neurons in obese mice and learned that they were not dying. So in that way, the mechanism underlying the changes in dopamine signaling in obese mice is very different from that in Parkinson’s disease. This is a good thing, as it would frankly be scary if a diet high in fat were causing the death of dopamine neurons! Instead, we observed dysfunction in a specific dopamine receptor (a protein that detects dopamine) in obese mice. We’re looking into what exactly is causing the dysfunction of this receptor, but unfortunately we do not currently know.

2. Your data does show that mice become both obese and move less on a high fat diet, but which bit convinces you that the “laziness” is because of the obesity? Can they not be two parallel outcomes of a high fat diet? If yes, then would a high fructose or high calorie diet lead to a similar outcome?

Let me clarify here – I don’t think the *weight* of the mice is causing the laziness; I believe dysfunction in their dopamine receptors is causing their laziness [More on this in Ref. 6]. And both this dysfunction and the weight gain can be caused by the high fat diet. So in that sense, yes, they can be two parallel outcomes of the high fat diet. To answer your second question, I’m not sure if other high calorie diets can cause the same dysfunction. This would be a great follow-up experiment!

3. In your paper, you describe the limitations of human studies that have measured Dopamine signalling and its links to obesity. Can you tell us a bit more about what the challenges are?

To date there have been a handful of studies that have compared D2 receptor levels in people with obesity vs. normal weight, and a minority have reported dysfunction in D2Rs in people with obesity. It is not clear why some studies have reported lower levels of D2 receptors, while most have not. However, measuring dopamine receptor levels in humans is difficult. The only technique for measuring receptor levels in humans is PET scanning, a technique where a radioactive tracer is injected and the brain is scanned for the location where the tracer binds. If more tracer binds, it is assumed there are more “available” receptors in that brain area. However, this technique can be affected by many factors, including what other transmitters are bound to that receptor. If internal levels of dopamine are higher during the scan, for instance, the amount of a radio-tracer that binds to a dopamine D2 receptor will be lower. The complexity increases when we consider how many things can alter dopamine levels throughout the day, which include caffeine use, food intake, and sleep. These are some of the challenges that face clinical research. Animal studies are less likely to incur these sources of variance, and have more consistently reported decreases in D2 receptors in association with obesity.

4. Are the changes in the striatum reversible, for example by forced exercise, or are there natural molecules that could restore Gi signaling?

There are no known ways to reverse these changes, but there is also very little research on this. There is a small amount of evidence in rats that forced exercise increases D2 receptor levels, but this is very preliminary and has not been replicated, nor studied in humans. This idea of how to alter D2 receptor levels is an extremely important concept for future research!

5. Are there common themes about obesity and lower activity levels that have emerged from animal studies, and how would you extend them, if at all, to humans? For instance, you say mice and rats are different; then would you expect people to be more similar to mice than to rats? Why?

It is very difficult to extend results from mice to humans, so I will be cautious on this one. However, there are some concepts from animal work that are relevant to humans. Many researchers have noted that animals voluntarily over-eat high fat diets, and that this leads to weight gain and obesity. While the specific macronutrient (fat vs. carbs vs. protein) content of human diets is the subject of a lot of debate when it comes to human obesity, it is fair to say that diets that induce over-eating will lead to obesity. Typically, foods that induce over-eating are highly palatable, such as junk foods that pack large numbers of calories into small volumes. While people are all different from one another, understanding the foods that a specific person overeats will inform what is likely to cause that person to gain weight.

As another concept that I believe is relevant to humans, in our study we reported that physical inactivity did not correlate with weight gain in mice. That is, we examined inactive mice that lacked D2 receptors, and found that they gained weight at the same rate as normal active mice. We also examined the natural variation of activity levels of normal mice and did not note any relationship here either. This seems to counter the conventional wisdom that inactivity should cause weight gain. However, this conventional wisdom is based largely on correlations between obesity and inactivity, rather than causal tests of this hypothesis. We all know that correlation does not imply causation, but it is very easy to get caught in this trap. In fact, in causal tests, the contribution of exercise alone (without changes in diet) to weight loss in humans is fairly small, generally resulting in 3-5 pounds of weight loss over the first year. This is consistent with our conclusions in mice. Studies in mice can help us understand at a mechanistic level why changes in activity (both increases and decreases) don’t translate into large changes in body weight [More on this in Ref. 7].

6. In their natural habitats, animals such as mice and rats consume high fat diets. Do you think your results would hold in wild rodents instead of lab-reared ones, especially if they were allowed to interact freely with each other and the environment?

Wow, what a great question! We use lab mice, which have been bred in captivity for many decades. This is somewhat similar to studying domesticated dogs vs. wild dogs. And in many ways, our laboratory mice are quite different from wild mice. However, I believe that even wild mice would become inactive on a high fat diet. The association between obesity and inactivity has been seen in many species including human adults, children, non-human primates, domesticated cats and dogs, rats, and mice. When an association occurs across so many species of animals, I think it is likely that it would extend to wild mice as well as laboratory mice. This would be a great student project to find some wild mice and test!

Modifying plants for human consumption

– the story of a CRISPR salad

Modifying plants to feed the world

How are we going to feed the 7.45 billion people in the world? Many people, including scientists, believe that genetically modified plants are one route to food security (1). However, we as a species and society remain largely afraid of genetically modified organisms, perhaps because of the seeming “unnaturalness” of it all.

A new technology now allows scientists to modify plants and other organisms in a more “nature-identical” way. In fact, plants produced by this method are genetically indistinguishable from naturally occurring varieties. Therefore, the Swedish agricultural board and the United States Department of Agriculture (for mushrooms, which as fungi are not ”technically” plants) have set a precedent by clearing seeds and mushrooms produced by this method for production and consumption.

The method used to produce these is called the CRISPR-Cas9 system (watch Carl Zimmer’s video on CRISPR here). It can be used to remove bits of DNA from an organism’s genome in a targeted manner. Think of it like what the invention of the engine was for transportation. Scientists all over the world are using these metaphorical scissors to snip out pieces of DNA, exploring function and consequences, in an effort to understand ourselves and the living world around us, and to cure mice of rare diseases (a favourite among us lab rats!) (2-5). Why not make varieties of plants that are better suited for human consumption using this method?

Undoubtedly, there are serious and complex socio-economic issues around the use and misuse of plant genetic engineering, but the discussion has often focussed on and suggested that the “science” is not good enough, which in the personal opinion of those of us writing this piece is a problem. We think that this blanket idea of “imprecise science” actually detracts from the real concerns, which are mostly social and economic (monopolization, unexpected risks of cultivating new cultivars, proprietary seeds, etc.). How well-founded or rational is our fear of modified crops? What do we know and not know? We thought we would ask the scientist who recently publicly ate a plate of CRISPR-generated cabbage and broccoli (broadcast on Swedish Public Radio by host Gustaf Klarin) and who has been actively reaching out to people about the use of this technology in agriculture.

Further Reading and References

1. Original broadcast of the CRISPR dinner from Sveriges Radio by Gustaf Klarin (in Swedish, amenable to translation using Google Translate or similar tools), September 5th, 2016

2. What was on the plate?

References

  1. http://time.com/4521582/2016-election-food/?iid=sr-link1
  2. RNA-guided genome editing in plants using a CRISPR-Cas system. (review)
  3. In vivo genome editing improves muscle function in a mouse model of Duchenne muscular dystrophy
  4. In vivo gene editing in dystrophic mouse muscle and muscle stem cells.
  5. Postnatal genome editing partially restores dystrophin expression in a mouse model of muscular dystrophy.


An interview with Stefan Jansson

Q. You may be remembered as the first man to eat a CRISPR-modified plant. How do you feel about that?

Well, I would of course prefer to be remembered for my contributions to science, not just because I did some cooking once upon a time. But these two things are closely connected; if I hadn’t had the scientific credibility that I actually have, I would just have been regarded as a ”mad scientist” doing a stunt, and the meal would not have got the same positive attention.

Q. As a plant biologist, do you have concerns about genetically modified crops – for example, what if the modified gene leads to an increased requirement for water, etc.? What measures are in place to assess this?

As with all plant breeding techniques, one can of course create varieties that are better or worse for the environment. I do like those that are likely to have a positive impact but not those that are likely to have a negative impact, regardless of whether they are made with techniques that fall under the GMO legislation or not.

Q. If CRISPR becomes a patented/proprietary technology, it may restrict adoption to richer countries and not actually benefit or ensure food security. What are your thoughts?

It would of course be better for the world with no restriction of access, but I also realize that we cannot change the patent legislation, and as companies are there to make money, we simply have to live with any restrictions that may come. I do hope that they will not restrict the use of those that need it most, but that is beyond your and my control.

Q. Within the current norms of GMOs versus non-GMOs, is insertion of genes from the same plant (non-foreign) allowed?

It is considered a GMO, at least under the EU legislation.

Q. Do seed companies make new varieties using random mutagenesis with chemicals and radiation? Is this permitted?

Many breeders are still busy exploiting the variation created by mutagenesis programs conducted many decades ago, but I assume that some also run new mutagenesis programs for some species. These are not considered GMOs.

Q. Plants have been selected by humans since the time we began agriculture. Would comparison of ancient plant genomes to current plant genomes be a good starting point to identify or make an inventory of desirable changes?

Indeed, and I have understood that some breeders do this.

Q. Given that most protein-coding genes in a plant are useful in some circumstance or other, how many genes do you think we can knock out and ensure ‘improvement’?

It is of course only a few genes that we can knock out and thereby increase the fitness of the plant; if such changes were beneficial, evolution would probably have already selected them. But all the genetic changes that we have made so far during domestication have probably led to reduced fitness of the plant while making it more useful to us in the agricultural system, so there is indeed huge room for improvement with new techniques as well.

Scouting for the forager ant

– Identifying the basis of labour division in carpenter ants

 

Short summary

What determines the way animals behave? Is it almost unalterably set in their genes, or in their environment, or in a complex interaction between what is within and without? A recent study explores the division of roles within the worker caste of the carpenter ant and shows for the first time that these roles can be reversed using mind-altering drugs (1).

 

“Minor” carpenter ants are the foragers

Anyone who has observed the incessant activity of a beehive or a column of ants between their nest and a food source can appreciate how wonderfully co-ordinated and orderly it all looks. This coordination is brought about by a fascinating division of labour – well separated roles of who must do what for the colony. How do insects develop these identities? This becomes especially intriguing in insect colonies in which all inhabitants are children of the same parents and hence genetically related to each other. In carpenter ants, which are the subject of this study, there are two castes within the female workers: minors and majors. The minors are smaller and do most of the foraging, whereas the majors rarely forage. This was established by setting up a foraging arena around an ant nest and recording how many individuals came out to forage and which caste they belonged to. While foraging activity increased with age in both minors and majors, the minors still performed most of the foraging. Additionally, the lead foragers, called scouts, were also mostly minors, and older scouts were much better and faster foragers than young ones, even when they were foraging in unfamiliar arenas. Hence, foraging was established as a minor-worker-specific behaviour in these ants.

 

What makes “minor” ants better at foraging?

So what determines the foraging behaviour of minors? A genetic explanation was undermined by the relatedness of minors and majors (they are sisters (2)), suggesting that the difference is unlikely to be due to differences in genes. Also, multiple studies suggest that such behaviours are likely to be controlled by epigenetic mechanisms. Epigenetic modifications are modifications of the genetic material without changes to the DNA sequence itself (check out a beautiful introduction to the world of epigenetics from MinuteEarth in the video below). These modifications, which include chemical groups added to the proteins that bind DNA, determine the context in which genes are expressed, i.e. they form a basis for the conversion of genotype to phenotype. In this study, the authors focused on the presence of a particular chemical group (an acetyl group) on histones. Previous studies have suggested that these marks may determine caste-specific behaviour in other eusocial insects.

Consistent with this idea, a drastic increase in the foraging activity of both majors and minors was brought about by feeding them drugs that inhibit the enzymes that remove the acetyl marks from histones – histone deacetylase inhibitors (valproic acid, which is used to treat mood disorders in humans, and Trichostatin A). However, the minors continued to forage more and performed almost all the scouting.

Molecules and mechanisms of caste-identity

The authors then determined the molecular mechanisms of how the acetyl marks were placed on the histones in the first place and which genes were responsive to these changes. When they inhibited the enzyme responsible for this behaviour (the histone acetyltransferase domain of the CREB-binding protein (CBP)), they saw a drastic decrease in scouting. This established the scouts as a distinct behavioural caste within the minors and suggested that acetylation of histones by CBP is the molecular mechanism that generates this behaviour.

They then looked at what makes major and minor workers different from the perspective of gene expression and came up with a different idea. What if there was a basal behaviour, “to forage”, in ants, and this was actively suppressed by these molecular mechanisms? This suggested that injecting the drugs at an early stage might prevent this suppression of behaviour. Voilà! In ants injected with the drug (Trichostatin A), the majors started to forage actively! In this case, it seems like timing was everything (look above for what happens when the treatment is started later!). Surprisingly, even when tested as a whole colony (with minors), the treated majors participated more often in foraging. To dissect this further, the authors directly inhibited a single enzyme, HDAC1 (histone deacetylase 1, also called Rpd3), and found the same increase in foraging activity in the majors. This suggests a central role for HDAC1 in repressing foraging behaviour in majors.

Learning from ants

Behaviours are baffling and possibly emerge from complex interactions between genes, how these genes get expressed and what triggers them. Such triggers can come from what we eat, what we smell, how we interact with our environment and one another. Animal behaviour – especially in the context of colonies or societies, is likely to involve intricate rules for function and order. Unraveling these rules is an exciting area of ongoing research. In a surprising but retrospectively sensible turn of events, the authors of this study have found that the division of labour among worker ants lies in the mind, is set up very early and can be reversed.

Acknowledgement

Thank you, Riley J. Graham, for helping out with this post!

More about the cool process of epigenetic inheritance from the wonderful Minute Earth

youtu.be/AvB0q3mg4sQ

An interview with Riley J. Graham

 

Q. You note in your study that a carpenter ant colony in nature maintains a 2:1 ratio of minor to major worker ants. What do you think are the mechanisms for maintaining that ratio? Is it likely to be the same mechanism (HDAC- and HAT-dependent) as you describe?

It’s difficult to say how this could occur, and there is likely a degree of variation in caste ratio in wild colonies. One of our ongoing questions is whether caste fate can be influenced during development by epigenetic drugs. To address this, we are developing methods to deliver controlled treatment doses during larval development to determine if this influences caste fate. Such a result would strongly suggest that HDAC and HAT activity is important for regulating the generation of caste-specific morphological traits, which could account for how this ratio is maintained in our experimental colonies.

Q. Will the drug-reversed majors show increased foraging even when there is an abundance of resources?

Yes, in fact this is precisely what we saw. All of our colonies were fed ad libitum for 10 days after injection, and majors treated with HDACi foraged significantly more than controls. However, because minor workers can feed their major sisters after foraging, a mixed-caste setting may keep majors full of food even when they never forage. To control for this type of between-caste effect, we did a different test in which we separated major and minor workers and withheld sugar water. This ensured all of our test subjects had a similar motivation to exit the nest in search of sugar, and prevented intrinsic behaviors of one caste from biasing the behavior of the other.

Q. Have you or others seen such role reversals/caste reversals in a natural setting? For example, colonies that are stressed for food.

Camponotus floridanus and its relatives in the subfamily Formicinae are interesting because of their discrete morphological caste systems (e.g. minor, major), but all eusocial insect species rely on some form of caste-based division of labor to survive. Brian Herb and colleagues reported differences in genome-wide patterns of DNA methylation between nurse and forager honeybees. These two groups are behavioral subcastes that arise as younger nurse workers age and progressively become active foragers later in life. Experimental reversion of foragers back to nurses caused a coordinated reversal of DNA methylation to reflect this behavioral change, suggesting epigenetic regulation of behavior is a common trait among social insects, and that behavioral castes are sensitive to environmental changes.

Q. What major contribution do you think will come out of studying eusocial insects like ants or honeybees when compared to solitary insects like fruit flies? 

Fruit flies do not exhibit the vast range of behaviors seen in social insects. Over evolutionary time, some eusocial insect species have acquired sophisticated division of labor strategies, enabling colonies to undertake complex collective tasks including nest architecture, cooperative brood care, and even horticulture, as in the leaf-cutter ants Atta and Acromyrmex. Given that single queens can give rise to millions of individuals in their lifetime, epigenetic regulation, rather than genetic differences between individuals, is expected to have an important role in the expression of caste-specific traits. We have not found allelic predictors of caste identity in C. floridanus, suggesting that the exceptional phenotypic differences between major and minor workers are likely attributable to epigenetic mechanisms. Eusocial insects are therefore excellent models for the study of how epigenetic changes can contribute to morphological and behavioral variation.

Q. Earlier studies, for example those by Sokolowski et al., have shown single-locus polymorphisms controlling foraging behaviour in fruit flies. In the light of this evidence, one might think that epigenetic control of foraging behaviour in ants could be an adaptation to their social lives. What do you think?

I believe you are referring to Marla Sokolowski’s work showing that mutations in the gene foraging (for) can lead to differences in foraging behavior in flies. Such polymorphisms might cause variation in foraging behavior in flies, but in ants, this SNP would contribute to increased foraging in all castes, perhaps even the queen. Given that queen foraging would typically be highly damaging to a colony’s survival, this SNP would be evolutionarily suppressed in queens, but could become positively selected for in minor workers. This variation in the fitness landscape between castes is one reason to think that epigenetic regulators could be important when different castes need to express different genetic profiles from a common genome. Molecular heterochrony allows different genes to be expressed at different times in an animal’s life, and while a very young queen might benefit from a SNP causing increased foraging, a mature queen would not. The genome’s ability to activate or suppress genes depending on caste and age is an important aspect of social insect biology that likely relies on epigenetic mechanisms.

Q. Carpenter ant workers in the study are genetically related, which led you to investigate possible epigenetic mechanisms determining caste-specific behaviours. Would you expect genetic bases for caste identity in species where genetic relatedness among the workers is not as high as in carpenter ants?

A number of studies describing genetic aspects of caste fate also suggest that the interaction of each genotype with the environment influences caste fate. In this light, it seems that genetic variation primarily alters an organism’s likelihood of becoming a particular caste, rather than rigidly determining caste fate. Allelic predictors of caste identity were not found in our ant species, suggesting behavioral and morphological phenotypes in social insects are likely the product of a gene by environment interaction that is facilitated by the epigenome.

Q. Insect colonies are fascinating systems to study genetic links to behaviour. Your study added valuable insights in mechanisms of determination of a caste specific behaviour. How easy (or hard) would it be to study more complex behaviours in other social animals (not necessarily insects)? 

Our work is among the first to look for indications that social insect behavior can be altered by the epigenome without any change in DNA sequence. Ants are a fascinating middle ground between the moderate behavioral variability seen in solitary insects, and the overwhelming complexity of higher order social behaviors, such as the relationships between kin grooming and reproductive hierarchies in primates. As scientists begin to consider more complex social features, they must also consider the vast array of behaviors that can be performed by each individual. In the case of kin grooming, researchers might be compelled to annotate a complex and fluid social network of kin grooming interactions, which may require a model that considers the behavior of each animal, as well as the behaviors of their social partners. This can get complicated quickly. This is not an insurmountable goal, but it is certainly harder to conceptualize and design experiments around. However, any molecular variation in the population that robustly contributes to behavior can hypothetically be measured, so it is not impossible to study organisms with greater behavioral complexity.

That which could kill you can also cure you

An open-source database for therapeutic uses of venom

 

About this database:

Popular myths and folktales often involve a hero setting out in search of miraculous cures from deep within an unexplored and dangerous forest, slaying wild beasts and sampling rare herbs to save a doomed civilization. The use of toxic substances, including toad and snake venom, has been described in traditional medicine [1,5].

Venoms are substances produced by one animal to defend itself from predators and/or to prey on other animals [2]. They are usually a complex mixture of organic compounds, the most potent of which have been found to be peptides; for example, the conotoxins from cone snails are peptide neurotoxins made of short chains of 10-30 amino acids [3].

Even related species seem to have evolved their own venom cocktails independently, giving rise to a huge diversity in venom types – over 10 million [4,7]. Components of venoms have incredible specificity and speed of action. For example, a snake neurotoxic venom is so specific in its action that it affects only particular ion channels in neurons [5,6], and a lizard neurotoxin must act fast enough to immobilize the prey so that it cannot disappear into a crevice. Both the specificity and the speed have tremendous implications for therapeutics, as delivering therapeutics to the right place in a timely manner is still quite a challenge. However, the diversity of venoms and their intrinsic toxicity pose a challenge for the systematic testing of these compounds for therapeutic use. Further, this area of biology is not well studied, and consequently there are huge gaps in our understanding of the mechanisms by which these compounds act. The creation of a new online database that allows easy retrieval of studies analyzing venoms from the viewpoint of their therapeutic use is a step in the right direction (https://venomkb.tatonettilab.org).

 

What did they do?

A striking feature of modern biology is that the speed at which scientists gather information surpasses the ability of even specialists to assimilate it. In this project, the authors have made a database of venoms and their therapeutic uses called the Venom Knowledge Base, or VenomKB. The database was constructed by scanning over 22 million studies whose titles and abstracts (research summaries) are cataloged in MEDLINE in a searchable format. Using the medical subject heading (MeSH) “Venoms/therapeutic use”, they retrieved 5117 pertinent articles. 275 of these articles were then sifted manually to form the first table of the database, with fields for ID, Venom, Effect, PubMedID (a link to the original research), and whether or not the entry is flagged for review.

Example:

271 | cobra venom cytotoxin | anticancer | 22888519 | Not Flagged
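For readers curious about the mechanics, the sketch below shows one way such a MEDLINE retrieval could be run using Biopython’s Entrez module. This is purely an illustration, not the authors’ pipeline (their actual code is in the GitHub repository mentioned below); the contact email and the record limit are placeholder assumptions.

```python
# Illustrative sketch only: querying PubMed for the MeSH term used by VenomKB.
# Not the authors' pipeline; requires Biopython (pip install biopython).
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI asks for a contact address

# Search PubMed for articles indexed under "Venoms/therapeutic use"
handle = Entrez.esearch(
    db="pubmed",
    term='"venoms/therapeutic use"[MeSH Terms]',
    retmax=20,  # kept small here; the authors retrieved 5117 articles
)
result = Entrez.read(handle)
handle.close()

print(result["Count"])   # total number of matching articles
print(result["IdList"])  # the first few PubMed IDs, e.g. "22888519"
```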

The second and third tables of the database use automated searches to extract the relevant information. The logic of both methods is described in the paper, and the code is publicly available in a GitHub repository. The second method also results in a classification into venom and effect, where the effect is often a disease state.

Example:

26493 | cobra neurotoxin proteins | paralysis | 7603413 | Not Flagged

The third method is interesting because it takes a semantic approach, classifying data into three categories – subject, predicate, and object. For example:

11567 | cobra venom | causes | analgesia | 16539838 | Not Flagged
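To make the shape of these records concrete, here is a small sketch that parses pipe-delimited lines like the examples above into structured subject-predicate-object triples. The class and field names are our own shorthand for illustration, not VenomKB’s actual schema.

```python
# Illustrative sketch only: turn pipe-delimited example records into triples.
# Class and field names are our own shorthand, not VenomKB's actual schema.
from dataclasses import dataclass

@dataclass
class VenomTriple:
    record_id: int
    subject: str    # e.g. "cobra venom"
    predicate: str  # e.g. "causes"
    obj: str        # e.g. "analgesia"
    pmid: int       # PubMed ID linking back to the original study
    flagged: bool   # has the record been flagged for manual review?

def parse_triple(line: str) -> VenomTriple:
    rid, subj, pred, obj, pmid, flag = (field.strip() for field in line.split("|"))
    return VenomTriple(int(rid), subj, pred, obj, int(pmid), flag != "Not Flagged")

print(parse_triple("11567 | cobra venom | causes | analgesia | 16539838 | Not Flagged"))
# VenomTriple(record_id=11567, subject='cobra venom', predicate='causes', ...)
```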

The database currently grapples with controlling false positives: records reporting a venom-effect pair when the substance in question is not in fact a venom. However, both automated methods, at least on a first pass, appear to be quite sensitive in picking up venom-effect pairs. Both automated tables have been manually scrutinized for obvious misclassifications; still, the user will have to follow the thread back to the original research paper to really get an idea of the nature of the link between the venom and the therapy and whether or not it is relevant. In spite of this, the ability to access data in this manner will promote more data-based hypothesis generation. This is relevant not only to venoms and their therapeutic effects but to all of biology, which is rapidly becoming a data-driven exercise.

An interview with Joe Romano & Nick Tatonetti

Q. When making a database like the Venom Knowledge Base, what are the major challenges – in the design, representation, maintenance, storage, retrieval etc.? What are the resources required to maintain it and how is such work funded?

In our experience, the most challenging portion of designing VenomKB was defining semantics – essentially, the words we chose and what they mean intuitively to readers. It sounds like something that should be relatively straightforward, but since venom research is a pretty new field, it can be tough to achieve. For example, depending on the portion of the knowledge base you are looking at, some records can be “venom [x] treats condition/disease [y]”, or it could be “concept [x] does [some kind of action] to concept [y]” (where either [x] or [y] can be a venom compound). It sounds kind of nuanced, but in designing something that is meant to be read by the public, it creates a pretty important distinction. As for resources and funding, one of the great aspects of this study is that anyone with some web design experience can create a site like VenomKB for very minimal cost using resources that are totally available to the public. To give a rough figure, it costs us less than $50 per month to keep the site running and maintained, which is pretty well within the resources of many research groups. And overall, the research we do in our lab is funded by different branches of the National Institutes of Health (like much of the university-based biomedical research in the United States). Having a non-commercial funding source has its advantages, such as the fact that we can release everything related to VenomKB free of charge to anyone worldwide with an internet connection.

Q. At this moment, if I understand correctly, you have used search terms and string matches? Isn’t this a limited approach given the lack of standard ways of naming things in the field?

Actually, all of the methods we used to extract useful information from the literature were a bit more subtle than basic string search terms. In each case, we used medical and biological terminologies – essentially structured dictionaries that help with standardization in biomedical data – to find concepts and then transform those concepts into the data records you can look up in VenomKB. When we store the information in the database, we use the readable strings so people can understand what the data refer to, but the tools and algorithms that sit behind the scenes attach a semantic meaning to the text that the computer can understand and use to identify synonyms and duplicate meanings. Granted, these approaches aren’t perfect, for exactly the reason you mention (there isn’t great consistency presently in naming venoms and their components). Since there is no terminology that natively understands venoms, we use other ones that we feel come closest to fitting our needs. Still, we acknowledge that there are many ways to improve our algorithms, and one of the projects we are currently working on is a resource that will standardize the names of venoms – hopefully solving this issue altogether!

Q. Traditional sources of knowledge (specific to communities, tribal healers, not part of allopathy, e.g. ayurveda) are not on PubMed in the form of papers. Would it be possible to extend your database to Google Books or libraries to mine the information there?

Yes, we have in fact considered this. There are really two challenges that prevented us from using sources like these at this stage: (1) In general, it’s harder to find data sets that are fully open access like PubMed is – for example, Google Books has faced legal issues due to claims of copyright infringement, and we have to be really cautious about things that could cause issues of legality. (2) Since PubMed almost always has abstracts available for view and download, much of the important information is condensed into a small amount of space that can be downloaded and searched through in its entirety on a modern computer. If we were to apply the same method to large collections of books, we might pick up a lot of irrelevant data (not to mention needing far more computational power to perform the searches). In short, yes, we do hope to incorporate data sets such as these for the very reason you mention, but we still need to define the best approach for doing so.

Q. You call this the Venom Knowledge Base; when do you think knowledge emerges from facts? For instance, how do you account for studies that are badly designed or cannot be reproduced? There is a general trend in science to create new information without knowing how to assimilate it; do you agree with this? What is the solution?

We totally agree with this statement. Bad science is an issue that remains regardless of how refined the peer-review process is. One of the features that VenomKB offers is the ability to connect purported knowledge (e.g., that a venom treats a certain disease, as claimed by a single study) to a number of biological databases that researchers can use to try and validate the claims made in the studies. This also touches on yet another study we’re in the process of designing right now – using information in VenomKB, we are planning to select 100 venoms and use modern genomics techniques to try and both validate information existing in VenomKB and to make new discoveries that may not have been picked up in existing literature. Either way, since VenomKB is a growing resource, we hope to add all of the validation and discovery study results as new modules to the knowledge base, separate but inherently linked to the existing database tables. Another functionality that helps transform raw data / information to actual knowledge is that VenomKB encourages community involvement. If you see something that looks suspicious, or maybe incorrectly included, there’s a button that lets us know that something probably shouldn’t be there, and we will take a look and make the decision whether to delete it or not. This helps bolster confidence in the data, and lets anyone at all get involved, even if they are not a scientist.

Q. If there is a student (high school or undergraduate) who wants to take on a science project, what kind of questions could they ask with this data? How accessible is the information to them? How can they contribute to this research?

One of the major draws of data science is that studies like this really have limitless potential for reuse. A few areas that I think would be particularly good for young biologists and computational biologists to explore include phylogenetic analysis of venomous animals, prediction of gene sequences that give rise to venom proteins, comparative analysis of venom proteins with drugs that are already approved by the FDA, and comprehensive literature reviews that aggregate known uses for a specific species’ venom. These are all examples of studies that we need in order to validate the reliability of VenomKB, and are straightforward to carry out using existing tools (usually open-source). With good advising, any of these projects could be within grasp of students who are relatively new to computational biology.

Q. What kind of access policies do you have for potential commercial use of the information? How can the industry contribute to sustaining this effort? What drove you to keep this in the public space, given the general trend of companies to make databases like this, at exorbitant prices, for private use?

VenomKB is fully open-access and open source, and we completely intend for it to remain that way. While we haven’t formally discussed what restrictions we may or may not place on reuse by corporate entities, we encourage any researchers – whether publicly or privately funded – to use the site to guide research that will benefit human health. Many pharmaceutical companies are devoting a lot of resources to studying ‘biologic’ drugs, a category that venom compounds definitely fall into. We like to hear from people who would like to make use of the data in their own research, so if any readers think they may be interested in doing so, drop us a line! To get to the final part of this question, open access is a rapidly growing trend in academic research. The journal we published in – Scientific Data – is entirely open access, as are a large number of others. The thought is that if research is meant to help humanity and benefit the largest number of people, we need to eliminate some of the barriers posed by cost and corporate ownership. Not all research can be done for free – the drug design process in particular requires major financial investments. However, things that can be done in settings like an academic research lab should try to make their data available and free if possible, for the purpose of lowering those same barriers.

References

1. “The Medicinal Use of Snakes in China”. Institute for Traditional Medicine, Portland, Oregon, 1997

2. The toxicogenomic multiverse: convergent recruitment of proteins into animal venoms. Fry BG, Roelants K, et al. Annu Rev Genomics Hum Genet. 2009

3. Novel peptides of therapeutic promise from Indian Conidae. Gowd KH et al. Ann N Y Acad Sci. 2005

4. Complex cocktails: the evolutionary novelty of venoms. Casewell et al. Trends Ecol Evol. 2013

5. The Bite That Heals: Scientists are unlocking the medical potential of venom. By Jennifer S. Holland for National Geographic.

6. Alpha neurotoxins. Carmel M Barber et al. Toxicon. 2013

7. VenomKB, a new knowledge base for facilitating the validation of putative venom therapies. Romano JD and Tatonetti NP. Sci Data. 2015

An elephant’s formula for cancer-free living:

nipping deviant cells in the bud!

 

Short Summary

A new study (1) sheds light on how elephants may be protected from cancers. In a different approach to understanding cancers, scientists are now analyzing animals that do not seem to be affected by this disease (2). Such analyses are likely to reveal protective mechanisms that in turn may help control or prevent cancers. This recent study asked why elephants are less prone to cancer than expected given their size (large cell numbers) and age (elephants can live up to 70 years) (1). A key finding is that elephants have multiple copies of a gene (p53) known to protect cells from becoming cancerous (1). The authors provide preliminary evidence from elephant blood cells suggesting that, compared with human cells, these cells are more likely to choose cell death over repair when faced with damage to their DNA (1). This is one possible mechanism to prevent cancers that may result from incorrect repair.

 

Introduction

Cancer.

The very word evokes terror and an awakening to the knowledge that each of us will be touched by this condition. The horror of our own cells dividing, moving and invading, beyond the control of our body. It makes us acutely aware of a smoker nearby, of the smell of paint, of asbestos, of burnt food, of the UV rays from the sun… and our mind quickly spirals into the endless lists of chemicals and agents that are associated with an increased risk of cancer. It seems as though we are navigating through a world that has set out to make us sick.

Our current knowledge of cancer suggests that larger animals (with more cells) and animals with increased longevity (requiring their cells to be replenished more often) are more likely to develop cancer (2,3). This, however, was found not to be the case when the authors looked at the data collected by the San Diego Zoo over the last 14 years on the causes of death in over 36 different mammalian species (1). Large animals such as elephants (weighing approximately 4,500 kg) do not seem to die from cancer as often as expected. Using data from the Elephant Encyclopedia database on the cause of death for 644 elephants, the authors conservatively estimated about 4.81% of these to be cancer-related deaths (more precisely, the estimated cancer incidence in elephants). In humans, the corresponding number is closer to 20%, or 1 in 5 people, for cancer-related deaths, and about 40% for the probability of developing cancer (these numbers are from data for the American population) (4).

What Did They Do and Find?

In order to understand the reason for this low cancer incidence in elephants, the authors turned to the sequence of the African elephant genome. Strikingly, they found that the elephant genome had 19 extra copies (compared to humans) of a gene known to be a central tumour suppressor in humans, called Tumour Protein p53 (5). They also performed additional sequencing from one African elephant to confirm this finding. The 19 extra copies were found to be slightly different from the ancestral copy common to humans and elephants, raising interesting questions about their origin and whether they were specifically maintained for their ability to confer resistance to cancers. They then confirmed that these copies are most likely functional, which matters because it is the product made from the gene that is thought to have a role in protecting cells from DNA damage. The contribution of these extra copies of p53 is still unclear.

In some interesting but preliminary experiments, the authors found that when subjected to a DNA damaging agent such as a high dose of radiation, cells from elephants (blood cells) were more likely to die than human cells from similar tissues. This suggests that there is a lowered tolerance for DNA damage in elephants, and a shift towards death rather than reliance on repair. Surprisingly, unlike human cells, elephant blood cells did not show increased p53 expression upon exposure to radiation, thus limiting the scope of this assay. Using a drug (doxorubicin), the authors found that elephant fibroblast cells also showed a higher percentage of death than human cells. All these experiments were done with cells derived from a single African elephant. Parallel experiments with cells from 6 Asian elephants of varying ages found a similar tendency towards increased cell death upon exposure to DNA damaging agents.

The Take Home

When cells sense DNA damage, they respond either by pausing to repair the damage or by entering the cell death pathway. These choices are often made depending on the extent of the damage and how repairable it is. p53 is involved in making this choice. In elephants, more cells seem to die rather than risk the imperfect repair that may lead to the development of cancers.

This study provides a framework for developing an evolutionary understanding of cancer and cancer-related genes. While the idea lacks confirmation and is very nascent, it powers thinking and suggests experiments that will help us understand cancer better. For instance: What about other animals which have an increased risk of cancer, such as the Tasmanian devil? How much of the cancer risk across species is related to p53? What is the mechanism by which p53 does this? Can we intervene with more copies of p53 to alleviate the risk of cancer? What would the accompanying increase in cell death imply for us? What does it imply for the elephants? Do all tissues use the same mechanisms, or are different tissues different?

Thanks!

@ Reety for pointing us to this work

@ Sreeja – Here is our first post on cancer, we will look out for more. Thank you for reading and sharing your thoughts.

 

More information on this research

Details about the authors, their research and how you can contribute can be found here.

Cancer Research

 

 

Cancer: What is it?

Our body is composed of tissues which are exquisite in their structure and function; they in turn are made up of structurally and functionally distinct cell types, which are controlled in number, position and function. Cells in an organ or tissue divide to make up, replenish and rejuvenate us. The number of divisions a cell is allowed to make depends on and is controlled by which cell type it is, which tissue it is from, its environment and many such factors, which constantly monitor if, when and how many times this particular cell should divide. When a cell divides, there are multiple checkpoints to ensure that it goes through the entire process of making two cells without error. During each division, mistakes are corrected, and cells in which the mistakes are not corrected are targeted to die by a well orchestrated process of cell death.

Given the large number of cells, the large number of divisions required to make and sustain our bodies and the large number of mistakes that can happen, things do go wrong in spite of these checks. The result is a cell that continues to divide, evades the signals to die and, instead of sensing its place and function, starts on a trajectory of its own. Scientists now believe that even with the large number of corrective mechanisms, such aberrant cells may be common. Over time, they may self-regress or be contained as benign lesions that do not spread. On the other hand, such cells can accumulate more damage and start invading the surrounding tissue; the reasons and mechanisms of this transition are just beginning to be understood. This invading mass of aberrant cells can then start diverting resources and interfering with the normal function of the body (6-7).

This is of course a simplified view of a tumour, a malignancy or a cancer; nevertheless, it provides us with a framework for understanding a recent work on why cancers may be less frequent than expected in elephants (given their large body size, number of cells and therefore number of cell divisions).

Go back to the article

Would you like to read this article in Hindi? Here is a translation by Dr. Sweta Srivastava.

An interview with Dr. Abegglen and Dr. Schiffman

Q. Is it your expectation that increased copies of p53, if introduced into human cells, would also result in more death than repair?

An extra copy of p53 has already been introduced into mice, and these are called ‘super p53’ mice. These mice were cancer resistant, and the scientists doing these experiments observed increased apoptosis in response to ionizing irradiation, suggesting that increased copies of the TP53 gene lead to an increased DNA damage response. The elephant retrogenes do not have the same structure as the ancestral elephant (and human) TP53 gene, but we do think that introducing elephant TP53 into human cells will lead to increased cell death. We are currently performing experiments to answer this question.

Q. You have analyzed very few elephants (with a possible bias towards zoo animals) and very few cell types. Can you claim that these findings are general based on this?

We have actually tested more elephants than we included in the paper. So far, of the animals that we have tested (4 African elephants and 10 Asian elephants), cells from all of the elephants have responded to DNA damage with increased apoptosis (cell death) compared to human cells. As we follow up on this initial study, we plan to test more elephants for an increased apoptotic response. Based on our current data, we hypothesize that increased apoptotic response to DNA damage is a general mechanism employed by elephants that express functional copies of TP53 retrogenes.

Q. In your paper, you suggest an explanation for the large number of p53 gene copies in African elephants. Could you tell us more about this: when did it happen, and which other animals have it? For instance, is this copy number increase for p53 common to all large mammals? What about the Tasmanian devil; why is it at such high risk for cancers?

One of the elephant’s closest living relatives with available genomic data is the hyrax. This animal has only 1 copy (2 alleles) of TP53, and the hyrax and elephant lineages diverged 54-65 million years ago. Because Asian elephants also have extra copies of TP53, we can place the gain of the extra TP53 retrogenes before the split of the African and Asian elephant species nearly 7 to 9 million years ago. So the answer for when the retrogenes appeared in the genome is sometime between 7 and 65 million years ago. We have not seen TP53 gene copy number increases in large mammals other than elephants. Tasmanian devils are actually susceptible to a transmissible type of cancer. Their rates of cancer are high because cancer cells are spread by biting, and the cells grow in the bitten animal because they escape immune recognition. This immune escape by the cancer is increased due to the large amount of inbreeding by Tasmanian devils, which results in limited genetic diversity.

Q. If we let cells (human and elephant) divide in the laboratory, is it possible to find the accumulation of mutations that lead to cancer and count how many divisions it took to accumulate enough mutations? In other words, could you test your idea in a laboratory setting with some functional readout of tumour formation?

Unfortunately, non-transformed, primary cells (cells taken from normal animal tissues) have a limited number of cell divisions before they senesce, or stop dividing. Some cells, when you allow them to divide in culture, will develop mutations that lead to spontaneous transformation, which allows them to divide indefinitely in culture. So far, none of the elephant cells that we have cultured in the lab have spontaneously transformed. Spontaneous transformation of fibroblasts in culture is not a frequent event, and just because we haven’t yet seen spontaneous transformation of elephant cells doesn’t mean that it can’t happen. However, we would hypothesize that spontaneous transformation of elephant cells occurs at a lower rate compared to human cells, and we are currently discussing ways to test this hypothesis.

Q. Does the elephant genome have signatures of lowered DNA repair? Your work suggests that elephants are more intolerant to mutations – do they in fact have a lower rate of mutation? If yes, then what bearing do you think this has on their evolvability? How different are these for the Asian and African elephants?

We actually didn’t find less DNA repair in elephant cells compared to human cells. The rate of DNA repair is similar in the two, which indicates that a proportion of the elephant cells are repairing their DNA. However, we did observe more apoptosis (cell death) in the elephant cells compared to human cells. Determining the rate of mutation accumulation in elephants is currently challenging. Part of the reason this is difficult to do is that very few elephants have been sequenced, and we haven’t yet identified the amount of normal sequence variation in elephants.

Q. Do you plan to look at other models such as Hydra and the naked mole rat, given that the first has an incredible capacity for regeneration and the second seems to be protected from cancer?

We do plan to explore other models. In fact, we are currently looking at cancer from the opposite perspective – why do some species, like dogs, get more cancer, and what can we learn about canine cancer to inform how we treat people with the same types of cancers? As for cancer resistance, mechanisms for cancer resistance in the naked mole rat have already been reported. These animals’ cells produce high-molecular-mass hyaluronan, which is involved in contact inhibition. Contact inhibition is a different mechanism of cancer resistance than what we have discovered in elephants. It appears that different species have evolved different mechanisms of cancer resistance.

Q. Can you tell us more about your idea of looking at cancer from an evolutionary perspective?

In our lab at the University of Utah, we try to understand the evolutionary basis of cancer development both on the large scale (why did cancer resistance evolve over hundreds of millions of years of evolution) and also on the short term (how do subclones within tumor cells evolve in a single person). By applying evolutionary theory to cancer development, we can learn why cancer develops and then how to use this information to come up with evolutionary-based strategies for treatment.

Beauty is but skin deep?

-Microorganisms in our skin respond to vitamin B12

 

Background

Many of the microorganisms that inhabit our skin surface, i.e. the skin microflora, are ‘opportunistic pathogens’: usually harmless, perhaps even beneficial, but capable of causing harm. We know very little about the conditions that turn these microorganisms against us. The bacterium Propionibacterium acnes, for example, is present ubiquitously on human skin, but causes acne in a subset of individuals (1). This provides an interesting opportunity to study the conditions that lead to acne formation (1).

Acne is a severe inflammation of the skin that can be brought about in many ways and is associated with particular diets (dairy consumption, for instance) (2-5). In a recent study, the authors analyzed the skin microflora of people presenting with and without acne (6). They found that a distinct set of genes was active in the microflora of people with acne when compared to that of people without acne.

 

What did they find?

What is the reason behind these differences? Are the microflora sensing something different in their environment (read, the skin of their human host)? To address this question, the authors focussed on P. acnes, notorious for getting deep inside the skin and causing inflammation (resulting in acne) (2).

The authors of this study then investigated the nature of the genes expressed by this bacterium uniquely in people with acne. They found diminished activity of genes that help in the synthesis of vitamin B12 and fatty acids. The production of vitamin B12 is a multi-step process requiring many different enzymes. The authors found that many of these enzymes were produced at much lower levels by the bacteria in individuals with acne. Clinicians have noted for a long time that some individuals develop acne (technically known as acneiform eruptions) as a side-effect of B12 supplementation (7). In this study, B12 supplementation was given to a group of 10 people, 1 of whom developed acne within one week. Further investigations suggested that P. acnes responds to B12 supplementation of the host by shutting down its own pathways for B12 synthesis. Such auto-regulation of B12 production has been seen in other bacteria as well.

So what happened in the patient who developed acne? Interestingly, in this patient, the authors were able to show that one of the intermediates of B12 synthesis was now being shunted into porphyrin synthesis. In lab-grown cultures of P. acnes, supplementation of B12 into the growth medium decreased its B12 synthesis and increased porphyrin synthesis. Porphyrins are known inflammatory agents and may underlie the development of acne. In fact, in other clinical studies, patients who responded positively to acne treatment also showed decreased porphyrin levels (8).

 

Conclusions

Take-homes from this study

Not only does this study suggest an explanation for why some people develop acne when they undergo B12 supplementation, it also reveals an intricate coupling between humans and the microorganisms that inhabit them. This is, to our knowledge, the first study to show metabolic effects in bacteria upon changes in the nutrient state of the human host. Many open questions emerge from this study: how do microorganisms sense the nutrient state of the host? Is a metabolic shift a common theme in microorganisms that are usually friendly and then turn deadly?

 

What is cool about this study?

In a crowded microbial environment like the human skin, picking up changes in the activity of genes of one specific bacterium is non-trivial. It is also hard to get enough information to reconstruct pathways and links between these gene activity states, and to perform robust analysis across people (taking into account the variation between individuals in the same group).

 

Pinch of Salt:

Don’t throw away your B12 supplements just yet!

It is important to understand that although this study provides an explanation for why some people develop acne upon vitamin B12 supplementation (7), nothing in it suggests that people should stop taking vitamin B12 supplements. This is particularly relevant as vitamin B12 deficiency remains a problem, especially for vegetarians and vegans, with consequences for both physical and mental health (9-11).

References

1. About Propionibacterium acnes and its relationship to acne

2. “Epidemiology of acne vulgaris.” K. Bhate and H.C. Williams, Br J Dermatol. 2013

3. “The role of diet in acne: facts and controversies.” Batya B. Davidovici and Ronni Wolf, Clin Dermatol. 2010

4. “Does diet really affect acne?” H.R. Ferdowsian and S. Levin, Skin Therapy Lett. 2010

5. “Evidence for acne-promoting effects of milk and other insulinotropic dairy products.” Melnik B.C., Nestle Nutr Workshop Ser Pediatr Program. 2011

6. “Vitamin B12 modulates the transcriptome of the skin microbiota in acne pathogenesis.” Dezhi Kang et al., Sci Transl Med. 2015

7. “Vitamin B12-induced acneiform eruption.” Ilknur Balta and Pinar Ozuguz, Cutan Ocul Toxicol. 2014

8. “In vivo porphyrin production by P. acnes in untreated acne patients and its modulation by acne treatment.” Claudia Borelli et al., Acta Derm Venereol. 2006

9. “The neurology of folic acid deficiency.” E.H. Reynolds, Handb Clin Neurol. 2014

10. “Vitamin B12 deficiency.” Alesia Hunt et al., BMJ. 2014

11. About Vitamin B12

Clocking In

Are we expecting the night owls to be the early birds?

 

Snap-shot

Subjecting shift workers to a schedule better suited to their internal body-clock (sleep-wake cycle) affects the general well-being of the workers (5). It results in better sleep, in both quality and duration (5). The important feature of this study, in our view, is its potential implications for human productivity.

 

Background for this work

People can be classified into chronotypes based on their internal body-clock, which determines when they wake up and when they go to sleep (1).

The internal clock (circadian rhythm) is entrained by light in humans, but it is linked to a multitude of seemingly unrelated features of human life and well-being, such as the timing of peak physical activity (2-4).

There are known detrimental effects of going against the body-clock as seen in simulated night shift work (5). There are very few studies on whether aligning work shifts to a person’s internal clock (chronotype) can benefit them.

 

What did they do?

Workers in a factory (n = 114) were divided into 4 classes based on the sleep-wake cycles determined by their internal body-clocks: morning people (Early1 and Early2) and night people (Late1 and Late2), with Early1 and Late2 representing the extreme groups.

All the workers were subjected to two kinds of schedules (shown in the cartoon above) –

Schedule 1: Standard 2-2-2 schedule (2 days each of Morning, Evening and Night shifts), irrespective of body clock.

Schedule 2: Chronotype-adjusted (schedule optimized to match the body clock). Early1: no night shifts; Late2: no morning shifts; Early2: more morning and fewer evening shifts; Late1: more evening and fewer morning shifts.
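For readers who prefer the rules spelled out compactly, here is a minimal sketch of the two schedules in Python. The encoding below (names, labels and data structures) is our own illustration, not the authors’ notation:

```python
# Shift labels: M = morning, E = evening, N = night.

# Schedule 1: the standard 2-2-2 rotation, applied to everyone
# regardless of chronotype.
STANDARD_ROTATION = ["M", "M", "E", "E", "N", "N"]

# Schedule 2: chronotype-adjusted rules, one entry per group,
# following the description in the text above.
CHRONOTYPE_ADJUSTED = {
    "Early1": "no night shifts",
    "Early2": "more morning shifts, fewer evening shifts",
    "Late1": "more evening shifts, fewer morning shifts",
    "Late2": "no morning shifts",
}

for group, rule in CHRONOTYPE_ADJUSTED.items():
    print(f"{group}: {rule}")
```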

What did they find?

They measured the duration and quality of sleep on workdays and free days during the standard Schedule 1, upon the shift to the chronotype-adjusted Schedule 2, and towards the end of Schedule 2.

The extreme chronotypes (Early1 and Late2) benefited the most in terms of workday sleep duration and sleep quality, owing to the alignment of the work schedule with their respective chronotypes (Schedule 2).

The difference between the sleep mid-point on workdays and on free days was used as another measure of changes in sleep quality. Surprisingly, only the Early1 group showed significant improvement in this measure, and not the other extreme chronotype group, Late2.
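The mid-point measure is simple arithmetic: the mid-point of a sleep episode is its onset plus half its duration, and the workday/free-day difference (a quantity often called ‘social jetlag’) is the gap between the two mid-points. A minimal sketch, with entirely hypothetical numbers:

```python
def mid_sleep(onset_h, duration_h):
    """Mid-point of a sleep episode, in hours after midnight (may exceed 24)."""
    return onset_h + duration_h / 2.0

def social_jetlag(onset_work, dur_work, onset_free, dur_free):
    """Absolute difference between free-day and workday mid-sleep, in hours."""
    return abs(mid_sleep(onset_free, dur_free) - mid_sleep(onset_work, dur_work))

# Hypothetical late-type worker: sleeps 23:00-06:00 on workdays
# and 02:00-10:00 on free days (02:00 written as 26.0 to stay on one axis).
print(social_jetlag(23.0, 7.0, 26.0, 8.0))  # -> 3.5 hours
```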

It is becoming increasingly rare for authors to discuss the limitations of their studies in the paper itself. We were therefore happy to note that Vetter et al. provide a thought-provoking discussion that covers the limitations of their study in detail.

Pinch of salt:

Short intervention period (total study period of 6 months), mostly male participants, low numbers of participants of the extreme chronotypes (Early1 and Late2), and lack of randomization.

The self-reported benefits (measured as ‘workday well-being’ and ‘satisfaction with time for social activities’) did not show a uniform effect across the different classes. Better measures and a longer study period may be required to clarify or substantiate these findings.

Open Questions:

How will it impact the overall productivity of the organization if shifts were arranged based on chronotype?

Are there other physiological parameters that such studies could monitor, such as weight gain/loss, appetite, episodes of illness?

Emulsifiers bring gut bacteria too close for comfort

Feeding mice artificial emulsifiers impacts their metabolism

Snap-shot of the study

Emulsifiers are used extensively in the food we eat (ice creams, biscuits etc.). This study examines the effect of feeding mice emulsifiers both in their food and drink (5).

Mice fed emulsifiers (carboxymethylcellulose, CMC, and polysorbate-80, P80) showed increased appetite. They also showed signs of low-grade inflammation in the gut and increased fat deposition. The authors attribute these effects to changes in the gut microbiota. While the number of microorganisms in the gut was not altered by the emulsifier-containing diet, the kinds of microbiota were completely different. The mucus lining was also depleted, and the microbes were closer to the cells of the gut, possibly causing the inflammation. While this study was done in mice, perhaps the quality and quantity of our own microbiota, and their response to emulsifiers, has some bearing on us too.

What did they do?

 

They added the widely used emulsifiers carboxymethyl cellulose (E466) and polysorbate-80 (E433) to the food and drinking water of young mice, at concentrations equivalent to those commonly used in human food.

They measured the abundance and diversity of the gut microbiota, inflammation of the gut (colitis) and metabolic disorders (fat accumulation, increase in food intake, and fasting blood sugar levels) in the emulsifier-fed mice, and compared these to control mice (no emulsifier in food or drink).

 

What did they find?

The treated mice had the same number of bacteria in their gut as mice that were not fed emulsifiers (control mice). The types of microbes, however, were quite different. The microbes were also found closer to the gut epithelium than in control mice. The treated mice showed increased appetite, followed by fat deposition and low-grade inflammation of the gut. Interestingly, transplanting the gut microorganisms from emulsifier-fed animals into germ-free mice also resulted in increased fat deposition and inflammation of the gut, suggesting that the changes in the microbiota caused by the emulsifiers may be sufficient to cause the observed metabolic dysfunction and inflammation.

The protective mucus layer around the gut epithelium was eroded in emulsifier-fed mice, reducing the separation between the microbiota and the gut epithelium. The emulsifiers caused a marked change in gut microbiota composition: more pro-inflammatory microbiota, including colitis-associated bacterial species such as Bilophila and Helicobacter. This changed gut microbiota increased gut inflammation and colitis. Emulsifier-fed mice also showed dysregulation of blood sugar levels (mild diabetes) and increased food consumption, which correlated with increased adiposity (fat deposition) and weight gain. In older mice (4 months old), the changes persisted for more than 6 weeks even after the emulsifiers were stopped. The observed effects of the emulsifiers depend on the gut microbiota: the emulsifiers had no effect in mice lacking gut microbiota (germ-free mice). Interestingly, such germ-free mice become susceptible to the effects of the emulsifiers if a regular gut microbiome is reintroduced into them.

 

Background to the study

An undisturbed gut flora is emerging as an important factor in health versus disease (1). Multiple physiological conditions, including obesity and type 2 diabetes, are now associated with changes in the gut microflora (2-3). Recent studies have found that artificial sweeteners can cause blood sugar related disorders in humans (4).

 

Take-home and implications

This necessitates a reevaluation of what goes into our food and how it affects our gut microbiota and our health. Standard food safety tests include toxicity and carcinogenicity (the ability to cause cancer); the importance of not perturbing the natural flora of the intestine, however, is becoming clear only now. These findings suggest that, in mice, intake of food or drink containing emulsifiers leads to weight gain and disorders such as diabetes by directly increasing food intake; these findings need to be verified in humans. The intriguing realization that emerges from this study is that not just the quantity, but also the quality, of the microorganisms in the mouse gut matters. In humans, the importance of gut microbial diversity has been documented in other contexts (1-3).

Limitations and Open Questions

Only 2 synthetic emulsifiers have been tested. We feel that this work makes a strong argument for the development of assay systems that monitor microbial health (especially that of gut microbes) for compounds added to food, medicines, etc. Given that the findings have such strong implications, we hope to see a wider spectrum of compounds (including more natural products like lecithin) examined similarly by the authors and others in the future.

The authors only briefly discuss the possible mechanisms underlying the change in the microbial population, and how these changes result in increased inflammation. This remains a major open question.

Germ-free mice already have an abnormal gut environment and are somewhat prone to inflammation. It is important to bear this in mind while interpreting the results of the fecal transplantation into germ-free mice.

This is a mouse study; it remains to be extended to humans.

An interview with Dr. Andrew Gewirtz

Q. From your work, it is clear that altered microbiota could lead to weight gain, fat deposition and the loss of the ability to control blood sugar levels. Can this be reversed by altering the microbiota?

Our studies in mice indicate it is reversible but it takes some time.

Q. How do you think the emulsifiers are changing the gut microbiota? Can you elaborate on some potential mechanisms?

They seem to promote bacteria breaching the mucus, which promotes inflammation, which changes bacterial populations, possibly by favoring detrimental bacteria.

Q. Are you suggesting that the normal gut flora under different conditions (presence/absence of emulsifiers) could turn pro-inflammatory? Are the other, missing microbiota (in the presence of emulsifiers) keeping them in check under normal circumstances?

Yes

Q. What according to you are the major caveats of your study?

It is a mouse study.

Q. Did you face challenges in publishing this work, given that it has such strong implications?

Some reviewers suggested a dialog with the food industry prior to publication, but we argued that our taxpayer-funded research did not require such approval. The Nature editors agreed with us.

Q. Do you plan to take this study forward in humans? What would be a suitable cohort for such study?

Yes. Probably start with healthy college students.

Q. Your work clearly has implications for how we decide what to put in our foods. What changes would you suggest to the current process by which such compounds are screened, approved and used?

I think a major overhaul is needed: more tests, and more information made readily available to consumers.

Q. Has your study affected your life and food choices?

Yes, my family has cut our consumption of processed foods in general and emulsifiers in particular.


Teixobactin: Can this new antibiotic help us sail through the doldrums of drug resistance?

iChip-based discovery of a potent novel antibiotic

 

Why do we need a new antibiotic?

What can happen if you self-medicate with an antibiotic, or do not finish a course of antibiotics prescribed for you? When cattle are fed antibiotics indiscriminately to keep them healthy? When sewage from hospitals is released into the community without complete treatment? The microbes that survive in these environments stop responding to the antibiotics around them (1). Our world currently faces the daunting task of treating people infected with resistant forms of many bacteria. Clinicians often have to resort to potent broad-spectrum antibiotics to treat infections that could be treated with first-line drugs a few decades ago. The other aspect of the problem of antimicrobial resistance is the lack of new treatment options: many current antibiotics are chemical modifications of ones already known to work, and designing completely novel synthetic molecules with antibiotic activity has not been very successful (2). In this rather bleak situation, a new study brings a ray of hope: its authors have enriched hitherto-uncultivated bacteria from the soil (3).

 

What is so special about this?

A very small percentage of the bacteria in soil can actually be grown in the laboratory (4). The development of tools and methods to grow more of them opens up a window for isolating new compounds, with potential antimicrobial activities, that these bacteria may be producing to their advantage in the complex niche of the soil. In this study, the authors diluted soil samples down to single cells and grew these in special chambers embedded in the soil, which allow nutrient exchange with the soil. Earlier studies have shown that this method can recover about 50% of the bacteria from the soil, and that once isolated, many of these bacteria can be grown in the lab.
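As an aside, the ‘dilute down to single cells’ step is governed by simple Poisson statistics: if chambers receive λ cells on average, a fraction e^(−λ) of them will be empty and a fraction λ·e^(−λ) will hold exactly one cell. Here is a minimal sketch of that arithmetic; the target of roughly one cell per chamber is our illustrative assumption, not a number taken from the paper:

```python
import math

def occupancy_probabilities(mean_cells_per_chamber):
    """Poisson probabilities that a chamber holds 0, 1, or more than 1 cell."""
    lam = mean_cells_per_chamber
    p_empty = math.exp(-lam)
    p_single = lam * math.exp(-lam)
    return p_empty, p_single, 1.0 - p_empty - p_single

# If the soil suspension is diluted so chambers receive ~1 cell on average:
p0, p1, p_multi = occupancy_probabilities(1.0)
print(f"empty: {p0:.2f}, single cell: {p1:.2f}, multiple cells: {p_multi:.2f}")
# -> empty: 0.37, single cell: 0.37, multiple cells: 0.26
```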

 

How was this antibiotic found?

After screening over 10,000 bacterial isolates in this way, Ling et al. identified a new species, Eleftheria terrae, that produced a potent compound against Staphylococcus aureus (S. aureus can cause infections, especially in hospitals; it is also notorious for acquiring resistance to commonly used antibiotics). They examined this compound in greater detail, asking questions like: what does it look like (chemical structure)? How is it made in the bacterium (biosynthetic pathway)? To answer these questions, they undertook detailed chemical analysis by NMR and Marfey’s analysis. The active compound, which the authors named Teixobactin, was found to be an unusual depsipeptide (the details of the structure are provided in the paper). They analyzed and were able to predict the pathway by which E. terrae makes this antibiotic. The compound was found to be completely novel.

 

How does it work?

How does this antibiotic kill the target bacteria? The first clue was that it was more effective against gram-positive than gram-negative bacteria. These two classes of bacteria (distinguished on the basis of their appearance after staining with dyes) differ in the number and nature of the protective coverings around them. Gram-positive bacteria have a thick cell wall around them, whereas gram-negative bacteria have a thin cell wall but an additional outer membrane. The cell wall is made up of repeating units of modified sugars known as peptidoglycan and is essential for the structural integrity of bacteria; a breach in this structure leads to the bacterium’s death. Interestingly, Teixobactin is not active against gram-negative bacteria, which have an outer membrane. However, a strain of E. coli (a gram-negative bacterium) with defects in the outer membrane is susceptible to this antibiotic. These data suggest that Teixobactin needs access to the bacterial cell wall for its activity. Consistent with this idea, they find that Teixobactin binds specifically to precursors of peptidoglycan and does not allow their incorporation into the cell wall. Instead of targeting the enzymes that carry out cell wall synthesis, Teixobactin, like the potent antibiotic vancomycin, interacts with structural components of the cell wall itself. It seems to target multiple precursors, and bacteria die not only from the lack of a cell wall but also from the accumulation of toxic intermediates of cell wall synthesis.

 

Does it work against pathogens?

They then asked: how effective is this antibiotic against common pathogens? Teixobactin was found to have potent activity against Staphylococcus aureus (which can cause disease under certain circumstances), Clostridium difficile (which causes colitis) and Bacillus anthracis (which causes anthrax). It also had good activity against hard-to-treat microorganisms like Mycobacterium tuberculosis (tuberculosis) and enterococci (intrinsically resistant to many antibiotics, and causally associated with urinary tract infections, among others). All of this is good news; however, how do we know that once in use, bacteria will not become resistant to Teixobactin? One way to answer this question is to subject bacteria to low levels of the antibiotic for a prolonged period and then test if they still respond to it. Fortunately, no resistance to Teixobactin emerged in either M. tuberculosis (which is notorious for acquiring resistance to multiple drugs) (6) or S. aureus, suggesting that resistance will probably be slow to evolve against this antibiotic. The next step, then, was to assess whether it was toxic to animal cells and effective when used in animals.

The compound was found to be eminently suited for use as a drug in animals. It was not toxic to mammalian cells, was active even in the presence of serum, and was stable in blood. Moreover, Teixobactin seems to have no carcinogenic properties. In mouse models of septicemia and pneumonia, mice treated with Teixobactin survived and responded well to treatment. This makes Teixobactin a remarkable candidate for further studies, with the possibility of clinical trials in humans.

 

What does this finding mean?

The approach used to isolate and characterize Teixobactin is novel and paves the way for the identification and characterization of many such compounds. We think that this may well be the beginning of a mining exercise in which we explore more antimicrobials from the soil. It remains to be seen whether Teixobactin can actually be used in humans; it is unclear how long that will take, or how long its usefulness will last. At the very least, Teixobactin offers a tempting glimpse of what’s hidden in the soil and gives us a better appreciation for the microbial community we nonchalantly tread upon.

Why are we more likely to get a cold in cold weather?

Summary

We don’t really know. What we do know is that some cold-causing viruses grow better at lower temperatures. Rhinoviruses, one of the most common causes of the common cold, display a temperature-dependent growth pattern: they grow better at cooler temperatures (33°C–35°C) (1,2), like those in the upper respiratory tract, than at the core body temperature of 37°C. Scientists hit a wall when they looked for a reason for this by analyzing the viruses themselves. Entry of the virus into the cell, for instance, was not affected at the cooler temperatures (33°C–35°C). The failure to find a convincing mechanism, such as a temperature-dependent viral enzyme or gene product, was puzzling until recently, when scientists turned the tables and started looking at the virus-infected cell rather than the virus itself.

More about the study

A study published in the Proceedings of the National Academy of Sciences (PNAS) looks at the cellular defense mechanisms and antiviral responses that come into play when Rhinoviruses infect cells (3).

The authors of this study compared the response to infection at the warmer body temperature (37°C) and at the cooler temperature found in the nasal cavity (33°C).

In what we think is the first step to understanding the role of the host (human) in this process, this study used a mouse-adapted strain of the virus. The researchers generated this strain by growing the virus for many generations in mouse cells; over time the virus acquired mutations, adapted, and became able to infect mouse airway cells. Foxman et al. isolated mouse airway epithelial cells and used the adapted strain to infect these cells in the laboratory. To test the temperature sensitivity, they carried out infection experiments at 33°C (cooler) and 37°C (warmer). They observed the expected decrease in the number of viral particles (titers) starting from 7 hours after infection at 37°C but not at 33°C, confirming that temperature does impact viral numbers. The relevant changes were in fact in the infected cells, which mounted a weaker antiviral response to the virus at the cooler temperature.

When a cell gets infected by a virus, it puts out a signal saying “I am infected” by secreting molecules such as interferons, and this is critical for mounting an antiviral response (4). The study by Foxman et al. shows that cells infected at the cooler temperature have lower expression of molecules critical to the antiviral response. The authors artificially activated a defense pathway (the RLR pathway) that results in interferon production, and showed that this pathway has a lower response at 33°C than at 37°C; in other words, there is lower production of interferons at cooler temperatures. Further, by genetically mutating either molecules of this pathway or a receptor for interferon in mouse airway cells, the authors found an increase in viral titers even at the warmer temperature (37°C).

These data suggest that cells may be able to ward off a Rhinovirus infection at warmer temperatures (37°C) thanks to a robust antiviral response resulting in the production of interferons. In the nasal epithelium, which is in constant contact with the outside air, the temperature of the cells is likely to be low enough for Rhinovirus to get away with a successful infection. Rhinoviruses, of course, are only one of many agents that cause colds, and it is not known whether other cold viruses are similarly checked at higher temperatures. It is likely that they use a repertoire of counter-strategies to the host defense response, some aspects of which may be temperature dependent. The experiments in this study were conducted on mouse cells grown in the lab. It remains to be seen whether this holds true within living animals and whether it can be extended to human–cold virus interactions.

An interview with Dr. Ellen Foxman

Q. How would you place this work in context of the unanswered questions in the field?

Question #1: Why do Rhinoviruses grow better at nasal cavity temperature than at lung temperature? It has been known since the 1960s that most Rhinovirus strains replicate poorly at body temperature (37°C) and better at slightly cooler temperatures (33–35°C), such as the temperatures found in the nasal cavity. However, the reason for this was not known. In our study, we observed that Rhinovirus-infected cells fight back against infection more at 37°C than at 33°C; in other words, the immune response triggered by the virus within infected cells is more robust at 37°C, and this is an important mechanism suppressing growth of the virus at 37°C.

Question #2: Does temperature affect the immune response to diverse pathogens, or just the immune response to Rhinovirus? We found that two cardinal signaling pathways involved in immune defense were more active at body temperature than at nasal temperature: RIG-I-like receptor signaling and Type I interferon receptor signaling. Since these pathways help defend us against many different viral infections, our results raise the possibility that cool temperature also provides an advantage to viruses other than Rhinovirus. For example, many respiratory viruses cause colds more often than they cause lung infections; perhaps this is a reason why. That being said, it will be important to directly test other viruses, since viruses are tricky and many have evolved ways to interfere with the immune responses we studied.

Q. How do you plan to take this study forward? What are the strengths and limitations of your model system?

The strength of our study was that we used a very well-defined experimental system in which we could change one variable at a time, to identify the immune system machinery needed to fight Rhinovirus within infected cells and to examine the effect of changing the temperature without changing anything else. Specifically, we used mouse primary airway cells grown in the laboratory. This way, we could compare cells from normal mice with cells from mice that differed by only one gene within the immune system. This allowed us to pinpoint which molecules within the immune system were important for defense against Rhinovirus, and which defenses were (or weren’t) affected by temperature. Also, by culturing cells in the lab, we were able to place them in incubators with controlled temperatures to clearly assess the effect of temperature without other confounding factors.

Limitations/next steps: Although in general mice have been a good animal model for the human immune system, mice aren’t humans, and the next step in the study will be to examine in more detail how these mechanisms work within the human airway.

Q. Rhinoviruses are known to sometimes infect the lower respiratory tract (5) – what do you think is going on there?

One possibility is that the immune mechanisms required to block Rhinovirus infection don’t work as well in people who tend to have lung symptoms with Rhinovirus infection—for example, people with asthma. In our study, we found that if we used mouse cells lacking the necessary immune system machinery to block Rhinovirus infection, the virus could grow quite well at 37°C. There is some evidence that in airway cells from people with asthma, this machinery may not function properly; if this is the case, this might be what permits the virus to thrive at the warmer temperatures of the lung.

Q. Do you think alternating the temperatures (for example, in a real-world scenario, inhaling steam or gargling hot water versus eating an ice cream) impacts the success of a Rhinovirus infection? You performed the entire infection at one temperature; are there shorter time windows within which a temperature change would positively impact disease outcomes (for example, gargling every morning, or drinking hot water after eating an ice cream)?

We did do some temperature shift experiments (see Figure S3 in the paper.) We found that the level of the immune response tracked with the temperature of the cells during the time window when the virus was actively replicating; the temperature before the infection didn’t matter much. I would speculate that some exposure of infected cells to warm temperature at any point when the virus is actively replicating might be beneficial.

Q. What kind of experiments would you need to conduct to suggest to people that using different methods to increase the temperature of the upper respiratory tract – like drinking hot water – may help fight a cold? Have such experiments been done?

The best way to prove that an intervention works is to directly test it, as you are suggesting. In this case, the best experiment would be to expose a group of volunteers to a fixed dose of Rhinovirus, and then place half of them on a well-defined hot water drinking program (perhaps the other half could drink only cold water). If the hot water program were effective, you would expect to see fewer colds develop in the hot water group than in the other group. This is a difficult study to perform: ideally, you would want to test a group of people who are identical in every way (genetics, behavior, environment, history of exposure to infections, etc.) except for the hot water drinking. In reality, this is quite hard to do, since every person is different! However, it might be possible to see an effect by studying a large group of people, especially if hot water drinking had a big impact (rather than a small effect) on whether or not colds developed after exposure to Rhinovirus. These types of studies can be very informative, but they can also be complicated to interpret, due to the inability to control all of the variables that may affect the outcome you are measuring (in this case, the development of cold symptoms).

I do not know of any study considered to be definitive on this subject. However, I did a literature search and found a number of studies that have looked at the effect of hot liquids or steam inhalation on common cold symptoms. You can read a few of these to get a feeling for the strengths and limitations. For example: Sanu and Eccles, 2008, which tested the effect of hot liquid drinking on cold and flu symptoms in subjects who were recruited when they already had symptoms (the pathogen causing the symptoms was unknown); and Singh and Singh, 2013, a meta-analysis of multiple studies looking at steam inhalation and common cold symptoms.

Q. You have emphasized the cell-autonomous response to viral infections; what about the other aspects of the immune response? Do you think they could also contribute to temperature sensitivity?

We only looked at the cell-autonomous immune responses in this study, and these were solely responsible for the temperature-dependent blockade of Rhinovirus in our experiments. In the body, where many cell types are present, the responses we examined (RIG-I-like receptor signaling and the Type I interferon response) can profoundly affect nearby and even distant cells through the action of secreted chemicals (cytokines). In this way, the phenomena we observed could also contribute to the temperature dependence of other immune responses; however, as yet we have no evidence for this.