Test set

The following is based on a CCP4BB discussion around June 17, 2008, entitled "How many reflections for Rfree?".

First of all, the test set is the set of reflections put aside for unbiased calculation of statistical quantities, in particular R_free and sigmaA.
 
The need to find a good compromise for the size of the test set has been discussed by Axel Brunger in a "Methods in Enzymology" (1997) paper. He writes:
"In all test calculations to date, the free R value has been highly correlated with the phase accuracy of the atomic model. In practice, about 5-10% of the observed diffraction data (chosen at random from the unique reflections) become sequestered in the test set. The size of the test set is a compromise between the desire to minimize statistical fluctuations of the free R value and the need to avoid a deleterious effect on the atomic model by omission of too much experimental data."
 
==How precise is the estimate of Rfree for a certain number of test set reflections?==
This has been discussed in papers by Axel Brunger and by Ian Tickle.
A rough estimate for the relative error of R_free is 1/sqrt(n), where n is the number of test-set reflections; the corresponding absolute uncertainty is R_free/sqrt(n). So if n is 1000 and R_free is 31%, you would expect a statistical fluctuation of about 31%/sqrt(1000), i.e. roughly 1 percentage point.
 
I believe this is from a paper of Ian Tickle (FIXME: reference).
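As a quick illustration (not from the original discussion), the 1/sqrt(n) rule can be tabulated for a few test-set sizes; the snippet below assumes R_free = 31% as in the example above.

 # Minimal sketch of the 1/sqrt(n) rule discussed above (illustration only).
 import math
 
 def rfree_fluctuation(rfree, n_test):
     """Expected statistical fluctuation of R_free for a test set of n_test reflections."""
     return rfree / math.sqrt(n_test)
 
 for n in (500, 1000, 2000, 5000):
     sigma = rfree_fluctuation(0.31, n)
     print(f"n = {n:4d}: R_free = 31.0% +/- {100 * sigma:.1f} percentage points")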


==How many reflections do you need to get a good estimate of the sigmaA values (as a function of resolution) needed to calibrate the likelihood target?==


Randy Read's rule of thumb is this: "My impression is that you gain relatively little by adding more reflections, once you have a total of about 1000 or at most 2000 in the cross-validation set. However, giving up more than 10% of the data is probably a bad idea, even if the sigmaA estimates are somewhat less accurate. I've had reasonable results refining against data sets of 3000-5000 reflections, setting aside only 10% (i.e. 300-500 reflections) for cross-validation.


So here's the recipe I would use, for what it's worth:
  <10000 reflections:        set aside 10%
  10000-20000 reflections:   set aside 1000 reflections
  20000-40000 reflections:   set aside 5%
  >40000 reflections:        set aside 2000 reflections


I'm sure that with a bit of thought someone could come up with a smooth function that achieves something similar, but it seems adequate."
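For what it's worth, the recipe above can also be written as a small piecewise function. This is just an illustrative sketch of the thresholds in the list, nothing more principled:

 # Illustrative sketch: the rule-of-thumb recipe above as a piecewise function.
 def test_set_size(n_unique):
     """Suggested number of reflections to set aside for cross-validation."""
     if n_unique < 10000:
         return round(0.10 * n_unique)    # 10%
     elif n_unique < 20000:
         return 1000
     elif n_unique < 40000:
         return round(0.05 * n_unique)    # 5%
     else:
         return 2000
 
 for n in (5000, 15000, 30000, 80000):
     print(f"{n:6d} unique reflections -> set aside {test_set_size(n)}")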
More quantitative estimates can be found in:

K. Cowtan (2005). Likelihood weighting of partial structure factors using spline coefficients. J. Appl. Cryst. 38, 193-198. http://journals.iucr.org/j/issues/2005/01/00/zm5022/zm5022.pdf

Kevin Cowtan summarizes: the result is that the number of reflections required varies with the level of error in the model (i.e. with sigmaA). For refinement close to convergence, one could use about 250 free reflections per sigmaA parameter (so 1500 would probably do). However, when dealing with a very poor initial model, or, for example, when using sigmaA in a density modification calculation, then it may be necessary to use all the reflections.


This agrees with Randy's explanation:
"In case anyone is interested, the reason this is a bit simplistic is that the number of reflections you need depends on how good your model is.  If you look at the contribution to the likelihood function from one reflection, it is very broad for low sigmaA values and becomes sharper as the sigmaA values increase.  This means that, if the true value of sigmaA is low, you need more reflections to get a precise estimate than if the true sigmaA value is high.  This happens because, if sigmaA is low, any value of Fo could be expected for a particular Fc because the model predicts the data poorly, but if sigmaA is high, then there is a very restricted range of possible values for Fo given Fc.  So to get stable refinement from a very poor model, you might need to set aside a larger number of reflections for cross-validated sigmaA estimation.  Later on, when the model is better, you could afford to absorb some of those reflections into the working set."