Biological replicates

We are currently running a screen in which supernatants from strains of a bacterial genus are tested against a pathogenic bacterial cell culture in a 96-well plate, with OD measured every 2 hours.

Each strain's supernatant has 3 replicates, and the control pathogenic bacterial culture has about 11 replicates within the plate.

Once each experiment is finished, the cell growth curves are modelled and 3 growth parameters (mu, lambda, and maximum absorbance) are determined for each supernatant tested and for the control. A Mann-Whitney U test is then performed for each growth parameter of each strain against the corresponding control parameter, to check for any possible inhibition.

Strains that show some inhibitory effect against this pathogenic bacterium will be tested again in further experiments to confirm the possible positive.

My doubt is whether it is correct to perform statistical tests comparing the supernatants on each plate against their corresponding control, considering that the possible candidates will be tested again in further experiments to confirm their inhibition. Since there are loads of strains to test, repeating every single experiment more than 3 times would take too long.
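As a sketch of the per-plate test described above, here is a hand-rolled Mann-Whitney U statistic comparing one growth parameter of one strain's 3 wells against the plate's ~11 control wells. All numbers are invented; in practice `scipy.stats.mannwhitneyu` would be used, since it also returns a p-value.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs b: pairs with a_i < b_j count 1, ties 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical growth rates (mu) from one plate: 11 control wells and the
# 3 wells of one strain's supernatant.
control_mu = [0.42, 0.45, 0.44, 0.41, 0.43, 0.46, 0.44, 0.42, 0.45, 0.43, 0.44]
strain_mu = [0.30, 0.28, 0.33]

u = mann_whitney_u(strain_mu, control_mu)
print(u)  # 33.0: every strain value sits below every control value
```

With only 3 vs 11 wells the smallest achievable p-values are limited, which is part of why the within-plate test can only flag candidates rather than confirm inhibition.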

What is your opinion? Is it correct to do statistics on a single plate (technical replicates) to gather possible positives for later confirmation (biological replicates)?

Thank you

Are biological replicates strictly needed in this case?

Replicates and repeats in designed experiments

Replicates are multiple experimental runs with the same factor settings (levels). Replicates are subject to the same sources of variability, independently of each other. You can replicate combinations of factor levels, groups of factor level combinations, or entire designs.

For example, if you have three factors with two levels each and you test all combinations of factor levels (full factorial design), one replicate of the entire design would have 8 runs (2³). You can choose to do the design one time or have multiple replicates.
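The 2³ = 8 runs of one such replicate can be enumerated directly; the factor names below are purely illustrative:

```python
# One replicate of a 2^3 full factorial design: every combination of three
# two-level factors (factor names and levels are made up for illustration).
from itertools import product

factors = {
    "temperature": ["low", "high"],
    "pH": ["low", "high"],
    "agitation": ["off", "on"],
}

runs = list(product(*factors.values()))
print(len(runs))  # 8 runs per replicate; two full replicates would be 16 runs
```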

  • Screening designs to reduce a large set of factors usually don't use multiple replicates.
  • If you are trying to create a prediction model, multiple replicates can increase the precision of your model.
  • If you have more data, you might be able to detect smaller effects or have greater power to detect an effect of fixed size.
  • Your resources can dictate the number of replicates you can run. For example, if your experiment is extremely costly, you might be able to run it only one time.

Evaluating strategies to normalise biological replicates of Western blot data

Western blot data are widely used in quantitative applications such as statistical testing and mathematical modelling. To ensure accurate quantitation and comparability between experiments, Western blot replicates must be normalised, but it is unclear how the available methods affect statistical properties of the data. Here we evaluate three commonly used normalisation strategies: (i) by fixed normalisation point or control, (ii) by sum of all data points in a replicate, and (iii) by optimal alignment of the replicates. We consider how these different strategies affect the coefficient of variation (CV) and the results of hypothesis testing with the normalised data. Normalisation by fixed point tends to increase the mean CV of normalised data in a manner that naturally depends on the choice of the normalisation point. Thus, in the context of hypothesis testing, normalisation by fixed point reduces false positives and increases false negatives. Analysis of published experimental data shows that choosing normalisation points with low quantified intensities results in a high normalised data CV and should thus be avoided. Normalisation by sum or by optimal alignment redistributes the raw data uncertainty in a mean-dependent manner, reducing the CV of high intensity points and increasing the CV of low intensity points. This causes the effect of normalisations by sum or optimal alignment on hypothesis testing to depend on the mean of the data tested: for high intensity points, false positives are increased and false negatives are decreased, while for low intensity points, false positives are decreased and false negatives are increased. These results will aid users of Western blotting to choose a suitable normalisation strategy and also understand the implications of this normalisation for subsequent hypothesis testing.

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.


Figure 1. Normalisations of Western blot replicates in the literature. We divide the normalisations found…

Figure 2. Signal linearity obtained by different Western blot detection systems. Representative experiments of Western blots containing a 2-fold serial dilution of BSA; shown are representative results from 3 independent experiments. BSA was detected by (A,C) ECL with X-ray film and (B,D) ECL with CCD imager. Blue squares indicate data points that are linear, while red triangles indicate data points outside the linear range of detection. To highlight linear and non-linear data we use linear trend lines, reporting the coefficient of determination. In (A,B) data are in log-log scale to improve visualisation.

Figure 3. Effect of the normalisation on the CV of the normalised data.

Figure 4. Correlation between the intensity of the normalisation points and the CV of the…

Figure 5. Effects of normalisation on false positives and false negatives when applying t-test for…

Types of Replicates: Technical vs. Biological

Both biological and technical replicates are key to generating accurate, reliable results and help address different questions about data reproducibility.

Technical Replicates

Technical replicates are repeated measurements of the same sample that demonstrate the variability of the protocol. Technical replicates are important because they address the reproducibility of the assay or technique; however, they do not address the reproducibility of the effect or event that you are studying. Rather, they indicate whether your measurements are scientifically robust or noisy and how large the measured effect must be in order to stand out above the background noise.[2] Examples may include loading multiple lanes with each sample on the same blot, running multiple blots in parallel, or repeating the blot with the same samples on different days.

Figure 1. Technical replicates help identify variation in technique. For example, lysate derived from a mouse that is loaded three times (A1, A2, A3) on each membrane then run and measured independently will help identify variation in technique.

When technical replicates are highly variable, it is more difficult to separate the observed effect from the assay variation. You may need to identify and reduce sources of error in your protocol to increase the precision of your assay. Technical replicates do not address the biological relevance of the results.

Biological Replicates

Biological replicates are parallel measurements of biologically distinct samples that capture random biological variation, which can be a subject of study or a source of noise itself.[3] Biological replicates are important because they address how widely your experimental results can be generalized. They indicate if an experimental effect is sustainable under a different set of biological variables.

For example, common biological replicates include repeating a particular assay with independently generated samples or samples derived from various cell types, tissue types, or organisms to see if similar results can be observed. Examples include analysis of samples from multiple mice rather than a single mouse, or from multiple batches of independently cultured and treated cells.

Figure 2. Biological replicates derived from independent samples help capture random biological variation. For example, three biological replicates (A, B, and C) are collected from three independent mice. Each of these biological replicates was run in three technical replicates (A1, A2, A3; B1, B2, B3; C1, C2, C3) in one Western blot assay.

To demonstrate the same effect in a different experimental context, the experiment might be repeated in multiple cell lines, in related cell types or tissues, or with other biological systems. An appropriate replication strategy should be developed for each experimental context. Several recent papers discuss considerations for choosing technical and biological replicates.[1,2,3]

For a helpful guide in choosing and incorporating the right technical and biological replicates for your experiment, check out the Quantitative Western Blot Analysis with Replicate Samples protocol.

Additional Resources to Help You Get the Best Data

LI-COR has additional resources that you can use as you plan your quantitative Western blot strategy.

Stupid question about replicates.

I am currently replicating some key data that I produced in different cell lines. With my initial data I created a stable gene knockdown, so everything was simple to figure out, but in replicating the data I am using a transient siRNA knockdown. If I perform a knockdown, and then split the cells from that experiment into 3 replicates of the functional assay I am running, do I have 1 biological replicate, because I only performed one knockdown, or do I have 3 biological replicates, because the assay was performed with 3 different groups of the same type of cell?

This is probably a simple question, but for some reason it is tripping me up. Thanks in advance for your help!

Edit: Thanks for the input, everyone!

I think if you do the knockdown before splitting your cells then you have three technical replicates for your functional assay. If you split the cells and then independently do the knockdown in each of the three plates you have three biological replicates. Or at least that's how I kind of see these things.

See, that was my original thinking as well, but if you create a cell line with a stable knockdown and use it to do three replicates of an assay then you have three biological replicates and I am having trouble trying to spot the difference here.

Given the setup, I’m going to agree and say that you have three technical replicates (one biological replicate). The general rule of thumb here is to ask yourself: if I were to start this experiment over again for another biological replicate, where would I start? With a stably-transfected cell line, you would not begin the experiment by performing the transfection again. However with transient siRNA gene knockdown, you would begin the experiment again by performing the knockdown. Therefore, the knockdown is part of the biological replicate of the experiment. If you’re doing only one siRNA knockdown, then you are dealing with only one biological replicate of the experiment, regardless of what you do with the cells afterwards.

Biological vs technical replicates

My brain is really being stretched for this one. I'm questioning whether or not the following are three biological replicates or just three technical replicates from the same biological source.

So I thawed a vial of cells and passaged them twice. With each passage I do a 1:2 dilution, meaning I have four flasks at the same passage. Now, here's where I'm stuck: if I take one flask and seed the cells onto a well plate (for, let's say, a drug treatment with three replicates), and I do the same thing for all of the other flasks, would that mean I have four biological replicates?

If it isn't a biological replicate, should I just passage one of the flasks so it's a whole new passage and repeat? (I know this is basic biology, but sometimes you just have a brain fart.)

Biological replicate = repeating the experiment on a new day, with new materials (i.e. how repeatable is the experiment?). If you would say "I've run this experiment 3 times", you have three biological replicates.

Technical replicate = having multiple samples with the same conditions within one experimental run (i.e. what is the range for this phenotype?). If you would say "I ran this experiment with n = 3 in each condition", you have technical replicates.

The different vials of passaged cells aren’t in and of themselves biological or technical replicates yet. It depends whether you work with them together or separately. So, if you only seeded a well plate with one vial and are going to conduct your experiment, then the next day (week, month, whatever) you seed another well plate and repeat the experiment again but separately, you have biological replicates. If you seeded them at the same time and performed the experiment on them at the same time, they become technical replicates.

Biological replicates - Biology

Yesterday I told you about some general principles of clinical trials, and how it's really important that they're controlled, meaning that if you have 2 groups of things you're comparing (e.g. treatment vs. no treatment) you want to make sure (as much as possible) that the only thing different between the groups is the treatment. I'm not involved in clinical trials, but the experiments I carry out in the lab (that is, the experiments I carried out in the lab and hope to be able to return to soon…) are each like mini (more easily controlled) trials – a lot of the same principles about dependent variables, independent variables, replicates, etc. still hold. So today I thought I'd tell you some more about experimental design.

In this post, I hope to take you inside the mind of a scientist as they plan out an experiment. With any experiment, you have to make compromises to get the type of information that is most important for you. Many of these "compromises" involve controlling variables, a crucial aspect of the scientific process.

A Thought Experiment: To illustrate some points, let's start with a "simple" experiment, one you might have done for a science fair. Say you want to test the effects of water and light exposure on plant growth. Seems pretty straightforward, right? Water and light are the "independent variables" you want to test and growth is the "dependent variable" that is "dependent" on your independent variables. So you take some seeds, give them different amounts of water and light, and measure their growth. But wait, what do you mean by growth? Increases in height, leaf size, circumference, total mass? In this simple experiment, you could collect them all, but this is often not practical. Once you choose your experimental "read out" you need to determine when to take the measurements – this depends on what you want to know. Are you interested in changes in the rate of growth? If so, you'd want to take a series of measurements at fixed timepoints. If you don't care about rate, just overall growth, you could take a single measurement at a single timepoint.

For simplicity's sake for this thought exercise, let's say you decide to measure plant height after two weeks. Now you have to decide how you want to change your variables. If you change the amounts of water and light at the same time, any changes in growth you see are a combination of effects of changing water and changing light and you won't know how much of the change you see is due to which factor. If you want to get information about the individual contribution of one variable, you need to hold the other variable constant – so you run two parallel sets of experiments. In one, you give each plant the same amount of water but different amounts of light, and vice versa for the other set.

If all variables but the one you're interested in are kept constant, then any differences in the dependent variable (growth in this case) are taken to be due to the independent variable you changed. If every other variable really were kept constant, this would be true, but this theoretical perfectly controlled system doesn't exist. There is always some variability in your variables! For example, there could be genetic differences in the seeds, slight differences in soil composition, differences in distance to the light, etc. The difficulty of controlling experimental variables is especially pronounced in biology because living organisms are incredibly complex.

Replicates: It is impossible to control for every variable – to account for this, scientists include replicates. With replicates, you hope that although each replicate will differ slightly, these differences will "buffer" each other, similar to how all the colors in the rainbow "cancel each other out" to make white. There are two main types of replicates that are both important:

Technical replicates are when you test the same sample multiple times to buffer out inconsistencies in measuring. In our plant case, this would mean measuring each plant several times – were you measuring from exactly the same starting point? Did you correctly count the number of lines on the ruler?

Biological replicates are when you test different samples that are "identical" in all aspects but their source. In our plant case, this would mean including multiple seeds in each treatment group. Since it's just a theoretical experiment, we could include as many seeds as we want, but in real experiments there are practical limitations (e.g. availability and cost of samples, amount of time and energy needed to collect the data).

More practical labby advice on replicates later…

In order to detect effects of your treatment, you need to make sure that differences between treatment groups are bigger than differences between individual samples within those treatment groups, and there are statistical tests scientists use to estimate how likely it is that the effects are due to the treatment.

Over-control? Controlling variables is crucial, but even if you could perfectly control every variable but the one you are interested in, you would lose important information in doing so. In science we talk of "non-additive effects" – where the sum of the effect of individual variables on their own is less than their combined effect because the variables themselves are interdependent. Say you wanted to determine the optimal amount of light and water for plant growth – you change these variables independently as we outlined above, and determine that the optimal amount of light is some value, A, and the optimal amount of water is B. This doesn't necessarily mean that the optimal growth conditions are A + B. It could be that light has a bigger effect at a certain water level, but you wouldn't see that effect if you only tested at a different water level. It also could be that one of the "controlled variables" such as temperature has a similar effect, with the effects of light or water being more pronounced at certain temperatures. Obviously, it's impossible to test each combination of variables, so scientists must make compromises when designing their experiments.

A more "real-world" example. To show how these concepts play out in a more realistic scenario, let's consider pharmaceutical drug development. Many early experiments are performed on cells in a dish (cell culture), which allows for moderate control over variables while still working in a cellular context. If a scientist wants to test the effects of a drug on human cells, they could take cells and plate them in 2 dishes – add the drug to one dish and only the delivery vehicle (the liquid the drug is dissolved in) to the second dish as a negative control. As we saw above, technical and biological variability could affect the results, so the scientist would actually want to set up a number of dishes, not just one of each.

Say the scientist sees that the drug has a desired effect – it's not quite time to celebrate. To make sure that the observed results weren't specific to that cell preparation, they would also want to repeat the experiment on a different date with "new" cells. Next, they will likely test the drug on a different cell line (the initial source of the immortalized cells is different, not just the "batch" of those cells) to make sure that the effects aren't cell-line specific.

If the drug has the same effect on multiple cell lines, it is more likely to have that effect in the body (in vivo), but this is far from guaranteed because the life of a cell in a dish is much different from the life of a cell in the body, where there are complex dynamics between cells and their surrounding environment, not to mention potential "off-target" effects that could cause dangerous complications. This is why further testing of the drug is required to determine 1) is it safe and 2) does it work?

When it comes to testing drugs in people, controlling (and over-controlling) variables is often a point of contention. If you thought cells in a dish were inherently variable, complete human beings are all the more so! In order to control for some of this variability, there are often strict requirements for participation in drug trials. As we saw above, there are legitimate reasons for such control – for example, if you test a drug in a patient who has an additional medical problem and that patient has a complication, you don't know if it's because of the drug alone, the preexisting condition, or the combination of the two. However, a problem often arises with regards to over-controlling variables. Tight control can lead a drug to be tested and approved on a population that isn't representative of the true patient population. The drug therefore might not be effective in most patients (and can even have adverse effects). As you can see, scientists must make difficult and careful decisions when designing their experiments.

Some more about replicates: To review, TECHNICAL REPLICATES test the same sample multiple times to account for variation in measuring, whereas BIOLOGICAL REPLICATES test different samples. And both of these are different from independent experiments, where you test different samples on different days with fresh setups, etc. Independent experiments can account for things like "there was something in the water" – or, more commonly in biochemistry, something left out of the water! It's really easy to accidentally forget to add things, so you want to develop systems like moving tubes to a different rack after you add to them, or checking off additions on a piece of paper. (Word of caution: it takes a while to develop habits, so in the beginning you can confuse yourself more – you might add something but forget to cross it off or move the tube, and then think you haven't added it when you have!)

Each of these types of "double-checking" has value, but in different ways. To help illustrate the difference, let's look at an example. Types of replicates are often explained in terms of patients or lab rats – e.g. say you treat 10 people with a drug that's supposed to lower blood pressure and then you measure the treated people's blood pressure. If you measure the same person's blood pressure 10 times (maybe you thought the machine was acting weird, or it took the person a while to relax), those would be technical replicates (they tell you how reliable the measuring is and the variation within the sample). If you measure each person's blood pressure once, those would be biological replicates (these tell you about differences in how people respond). And if you repeated the experiment with a different group of people, that would be an independent experiment (this tells you how representative of the wider population that first group was).

Say 1 of the people responded really well to the treatment but the other 9 didn't. If you measured that 1 person's blood pressure 5 times but everyone else's just once and then averaged all the measurements together, it'd seem like the drug worked a lot better than it actually did. So instead, you first average each person's technical replicates and then average those averages (the strong responder contributes only one value, so they don't skew the results). So, if you have 10 people, your "n" is 10 regardless of how many measurements you make.
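The averaging rule above can be sketched as follows (all numbers are invented):

```python
# Average technical replicates per subject first, so each subject contributes
# exactly one value and n stays equal to the number of subjects.

measurements = {                  # subject -> technical replicates (mmHg drop)
    "p1": [18, 20, 19, 21, 17],   # strong responder, measured 5 times
    "p2": [2], "p3": [1], "p4": [3], "p5": [0],
    "p6": [2], "p7": [1], "p8": [2], "p9": [3], "p10": [1],
}

per_subject = [sum(v) / len(v) for v in measurements.values()]
n = len(per_subject)              # n = 10 subjects, not 14 measurements
overall = sum(per_subject) / n
print(n, overall)                 # 10 3.4
```

Averaging the raw 14 measurements instead would weight the strong responder five times as heavily and inflate the apparent effect.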

I don't work with people (well, I do work with people, but I don't do research on them) or animals – but I do work with a lot of replicates. Whether they're different protein preps (biological replicates) or tests of the same protein prep repeated multiple times to account for things like pipetting differences (technical replicates). And if I repeat the experiment on a different day with different preps, that would be an independent experiment.

  • e.g. same protein prep: repeat the experiment in parallel or take multiple samples from the same reaction
  • variation in measurement – consistent pipetting (was there an air bubble? did you forget to add something?)
  • variation in sample – was the sample mixed well? did you happen to take a pipetful that was super full of stuff?
  • more technical replicates -> better estimation of the mean, but does not change sample size
  • how representative is your sample?
  • different people or animal subjects, different cell lines, different protein preps
  • are differences you saw in one sample really real, or just background variation?

Many Approaches: Another thing to take into account is that in biochemistry there are often several approaches to a question. Say I want to figure out if 2 proteins interact – should I use an EMSA, an IP, or analytical chromatography? (Not gonna explain these techniques here, but you can find more info about them on my blog.) One isn't inherently "better" or "worse" – they just give you different information. Each experiment, in all areas of science, has its strengths and weaknesses. It is important that scientists explore their options, think critically, and choose the experiment that will answer the question they're asking. Ideally, scientific conclusions should be drawn from multiple lines of evidence, multiple experiment types. Similarly to how a large number of biological replicates helps "buffer" variation, using multiple experiment types allows the strengths of one technique to complement the weaknesses of another.

In addition to careful experimental planning, it is important that scientists recognize the weaknesses and limitations of the methods they choose to use and convey these caveats to their audience. If you are the audience, some things to look for are: replication (technical but especially biological) and multiple lines of evidence (different types of experiments used).

This post isn&rsquot meant to dampen your enthusiasm for science, but rather to help you think like a scientist and understand why we do the things we do. There are many ways to answer similar scientific questions and the particular experiment you choose depends on many factors (both practical and theoretical). Like in everything, there is variability among scientists and the techniques we choose, but this variability doesn&rsquot make the pursuit of science less valid.

You could first look at the degree of correlation between the two replicates - what proportion of peaks are shared between the two, versus peaks found in one sample only? This will give an idea of how repeatable the analysis is, and how many peaks are variable between samples or due to artefacts of the method.

When it comes to interpreting the biology behind the binding patterns, you are probably going to want to only interpret peaks which you find in both replicates.
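A toy sketch of this overlap check, treating each replicate's peak calls as a set of already-matched intervals (all coordinates are invented; real pipelines match peaks by genomic overlap, e.g. with bedtools or IDR, rather than by exact coordinates):

```python
# Proportion of peaks shared between two ChIP-seq replicates, with peaks
# represented as (chrom, start, end) tuples that have already been merged
# to common coordinates.

rep1 = {("chr1", 100, 300), ("chr1", 900, 1100), ("chr2", 50, 250)}
rep2 = {("chr1", 100, 300), ("chr2", 50, 250), ("chr3", 10, 200)}

shared = rep1 & rep2                 # peaks found in both replicates
frac_rep1 = len(shared) / len(rep1)  # fraction of rep1 peaks reproduced
print(len(shared), frac_rep1)        # 2 0.6666666666666666
```

The peaks in `shared` are the ones you would carry forward for biological interpretation; the replicate-specific peaks give a sense of method artefacts and sample-to-sample variability.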

This question is somewhat generic, so a generic answer is that ENCODE has a Transcription Factor ChIP-seq Data Standards and Processing page that can give you a useful starting point.

For TF ChIP-seq data with replicates, the Irreproducible Discovery Rate (IDR) method helps leverage replicates to produce higher confidence peak calls, producing both "optimal" and "conservative" peaks, defined by some IDR threshold. Here's ENCODE's description of the method:

A statistical procedure that operates on the replicated peak set and compares consistency of ranks of these peaks in individual replicate/pseudoreplicate peak sets. Peaks with high rank consistency are retained. IDR can operate on peaks across a pair of true replicates resulting in a "conservative" output peak set, or across a pair of pseudoreplicates resulting in an "optimal" output peak set. Peaks in the conservative peak set can be interpreted as high-confidence peaks, representing reproducible events across true biological replicates and accounting for true biological and technical noise. Peaks in the optimal set can be interpreted as high-confidence peaks, representing reproducible events and accounting for read sampling noise. The optimal set is more sensitive, especially when one of the replicates has lower data quality than the other.

The Python code to run IDR on peak sets can be found in ENCODE's ChIP-seq pipeline repo on GitHub.

But Do Watches Replicate? Addressing a Logical Challenge to the Watchmaker Argument

Were things better in the past than they are today? It depends who you ask.

Without question, there are some things that were better in years gone by. And, clearly, there are some historical attitudes and customs that, today, we find hard to believe our ancestors considered to be an acceptable part of daily life.

It isn’t just attitudes and customs that change over time. Ideas change, too—some for the better, some for the worse. Consider the way doing science has evolved, particularly the study of biological systems. Was the way we approached the study of biological systems better in the past than it is today?

As an old-earth creationist and intelligent design proponent, I think the approach biologists took in the past was better than today for one simple reason. Prior to Darwin, teleology was central to biology. In the late 1700s and early to mid-1800s, life scientists viewed biological systems as the product of a Mind. Consequently, design was front and center in biology.

As part of the Darwinian revolution, teleology was cast aside. Mechanism replaced agency and design was no longer part of the construct of biology. Instead of reflecting the purposeful design of a Mind, biological systems were now viewed as the outworking of unguided evolutionary mechanisms. For many people in today’s scientific community, biology is better for it.

Prior to Darwin, the ideas shaped by thinkers (such as William Paley) and biologists (such as Sir Richard Owen) took center stage. Today, their ideas have been abandoned and are often lampooned.

But, advances in my areas of expertise (biochemistry and origins-of-life research) justify a return to the design hypothesis, indicating that there may well be a role for teleology in biology. In fact, as I argue in my book The Cell’s Design, the latest insights into the structure and function of biomolecules bring us full circle to the ideas of William Paley (1743-1805), revitalizing his Watchmaker argument for God’s existence.

In my view, many examples of molecular-level biomachinery stand as strict analogs to human-made machinery in terms of architecture, operation, and assembly. The biomachines found in the cell’s interior reveal a diversity of form and function that mirrors the diversity of designs produced by human engineers. The one-to-one relationship between the parts of man-made machines and the molecular components of biomachines is startling (e.g., the flagellum’s hook). I believe Paley’s case continues to gain strength as biochemists continue to discover new examples of biomolecular machines.

The Skeptics’ Challenge

Despite the powerful analogy that exists between machines produced by human designers and biomolecular machines, many skeptics continue to challenge the revitalized Watchmaker argument on logical grounds by arguing in the same vein as David Hume.1 These skeptics assert that significant and fundamental differences exist between biomachines and human creations.

In a recent interaction on Twitter, a skeptic raised just such an objection. Here is what he wrote:

“Do [objects and machines designed by humans] replicate with heritable variation? Bad analogy, category mistake. Same one Paley made with his watch on the heath centuries ago.”

In other words, biological systems replicate, whereas devices and artefacts made by human beings don’t. Skeptics contend that this dissimilarity is so fundamental that it undermines the analogy between biological systems (in general) and biomolecular machines (specifically) and human designs, invalidating the conclusion that life must stem from a Mind.

This is not the first time I have encountered this objection. Still, I don’t find it compelling because it fails to take into account man-made machines that do, indeed, replicate.

Von Neumann’s Universal Self-Constructor

In the 1940s, mathematician, physicist, and computer scientist John von Neumann (1903–1957) designed a hypothetical machine called a universal constructor. This machine is a conceptual apparatus that can take materials from the environment and build any machine, including itself. The universal constructor requires instructions to build the desired machines and to build itself. It also requires a supervisory system that can switch back and forth between using the instructions to build other machines and copying the instructions prior to the replication of the universal constructor.

Von Neumann’s universal constructor is a conceptual apparatus, but today researchers are actively trying to design and build self-replicating machines.2 Much work needs to be done before self-replicating machines are a reality. Nevertheless, one day machines will be able to reproduce, making copies of themselves. To put it another way, reproduction isn’t necessarily a quality that distinguishes machines from biological systems.

It is interesting to me that a description of von Neumann’s universal constructor bears remarkable similarity to a description of a cell. In fact, in the context of the origin-of-life problem, astrobiologists Paul Davies and Sara Imari Walker noted the analogy between the cell’s information systems and von Neumann’s universal constructor.3 Davies and Walker think that this analogy is key to solving the origin-of-life problem. I would agree. However, Davies and Walker support an evolutionary origin of life, whereas I maintain that the analogy between cells and von Neumann’s universal constructor adds vigor to the revitalized Watchmaker argument and, in turn, the scientific case for a Creator.

In other words, the reproduction objection to the Watchmaker argument has little going for it. Self-replication is not a basis for viewing biomolecular machines as fundamentally dissimilar to machines created by human designers. Instead, self-replication stands as one more machine-like attribute of biochemical systems. It also highlights the sophistication of biological systems compared to systems produced by human designers. We are a long way from creating machines as sophisticated as those found inside the cell. Nevertheless, as we continue to move in that direction, I think the case for a Creator will become even more compelling.

Who knows? With insights such as these maybe one day we will return to the good old days of biology, when teleology was paramount.

Metabolomics Study Design – Replicates & Volumes

The table below shows general guidelines and our preferences for minimal sample quantities required for common sample types.

Sample Replicates

For samples that will be directly compared to each other, sample preparation and measurements should be performed at the same time. This maximizes the information collected from each assay and ensures that molecules of interest are measured under the same conditions.

Two examples of experimental design (dose-response and temporal) recommendations in regard to minimum controls and biological replicates for mass spectrometry analysis are shown below.

In addition, for each project CIT staff prepares a pooled quality-control sample (equal volumes of every sample in the project) and injects it for analysis before the run and after every 10 individual sample injections. These quality-control samples are used to assess instrument variability, data quality, and injection-volume variation. The number of required technical replicates will vary with the experimental design, the aim being to keep data quality high and to minimize sample-to-sample variation attributable to the instrument.
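The QC cadence described above—one pooled QC injection before the run and one after every tenth sample—can be sketched as a small run-sequence builder. The function name and sample labels are hypothetical placeholders, not part of any CIT software:

```python
# Build an injection sequence with pooled-QC injections interleaved:
# one before the run, one after every `qc_interval` samples, and one
# at the end if the run would otherwise finish on a sample.
def build_run_sequence(samples, qc_interval=10, qc_name="pooled_QC"):
    sequence = [qc_name]                 # QC injection prior to the run
    for i, sample in enumerate(samples, start=1):
        sequence.append(sample)
        if i % qc_interval == 0:         # QC after every 10th sample
            sequence.append(qc_name)
    if sequence[-1] != qc_name:          # close the run with a QC
        sequence.append(qc_name)
    return sequence

# Example: 24 samples -> QC, 10 samples, QC, 10 samples, QC, 4 samples, QC
run = build_run_sequence([f"sample_{n:02d}" for n in range(1, 25)])
```

Drift in the repeated QC injections (retention times, peak areas) is then compared across the run to flag instrument variation.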

References and Resources

Guided Paper

Meselson, M. and Stahl, F.W. (1958). The replication of DNA in Escherichia coli. Proceedings of the National Academy of Sciences U.S.A., 44: 672–682.


  • Matthew Meselson’s letter to James Watson from November 8, 1957, describing the results of their experiments on DNA replication.
  • Meselson, M., Stahl, F.W., and Vinograd, J. (1957). Equilibrium sedimentation of macromolecules in density gradients. Proceedings of the National Academy of Sciences U.S.A., 43: 581–588.

This paper describes the use of the centrifuge and density gradient to analyze biological molecules, a technique that was used in their 1958 paper but is also very broadly used for many applications in biology. See also Dig Deeper 3.

An outstanding resource for those wanting a detailed, accurate description of the Meselson–Stahl experiment.


  • White Board Video on the Semi-Conservative Model of DNA and the Meselson–Stahl Experiment by iBiology.

A nice 7:30 min video describing the Meselson–Stahl experiment and its conclusions.

This film documents the discovery of the structure and replication of DNA, including interviews with James Watson, who, along with Francis Crick, proposed the double-helix model of DNA.

This activity is often used in conjunction with the short film The Double Helix. It introduces students to the Meselson–Stahl experiment and helps them understand the concepts generated by those experimental results.

This collection of resources from HHMI Biointeractive addresses many of the major concepts surrounding DNA and its production, reading, and replication.
