Standard deviation in statistics, typically denoted by σ, is a measure of variation or dispersion (the extent to which a distribution is stretched or squeezed) among the values in a set of data. The lower the standard deviation, the closer the data points tend to be to the mean (or expected value), μ; conversely, a higher standard deviation indicates a wider range of values. As with other mathematical and statistical concepts, there are many different situations in which standard deviation can be used, and thus many different equations. In addition to expressing population variability, the standard deviation is also often used to measure statistical results such as the margin of error. When used in this manner, standard deviation is often called the standard error of the mean, or the standard error of the estimate with regard to a mean.

Both graphs above show dose-response curves, with the response measured in ten replicate values at each dose. In the left graph, the standard deviation of those replicates is consistent: it is about the same all the way along the curve. In the right graph, the standard deviation of the replicates is related to the value of Y; as the curve goes up, variation among replicates increases. In both cases, the scatter among replicates is sampled from a Gaussian distribution. In the graph on the left, the SD of that Gaussian distribution is the same for all doses. On the right, the SD is a constant fraction of the mean Y value: when a response is twice as high as another response, the standard deviation among its replicates is also twice as large. In other words, the coefficient of variation (CV) is constant.

What happens if you fit a model to the data on the right without taking into account the fact that the scatter increases as Y increases? Consider two doses that give responses that differ by a factor of two. The average distance of the replicates from the true curve will be twice as large for the higher response. Since regression minimizes the sum of the squares of those distances, those points will be expected to contribute four times as much to the sum-of-squares as the points with the smaller average Y value. In other words, a set of replicates whose average Y value is twice that of another set will be given four times as much weight. This means, essentially, that the curve-fitting procedure will work harder to bring the curve near those points, and relatively ignore the points with lower Y values. You'd need to have four times as many replicates in the lower set to equalize the contribution to the sum-of-squares.

The goal of weighting is for points anywhere on the curve to contribute equally to the sum-of-squares. Of course, random factors will give some points more scatter than others; the goal of weighting is to make those differences entirely random, not related to the value of Y. The term weight is a bit misleading, since the goal is to remove that extra weight from the points with high Y values. Prism offers six choices on the Method tab of nonlinear regression, and lets you test for appropriate weighting.
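As a quick numerical check of the definitions above, here is a minimal Python sketch (the data values are invented purely for illustration) computing the mean μ, the population standard deviation σ, and the standard error of the mean:

```python
import numpy as np

# Invented sample data, for illustration only
data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mu = data.mean()                            # mean, μ
sigma = np.sqrt(np.mean((data - mu) ** 2))  # population SD, σ (ddof=0)
sem = sigma / np.sqrt(len(data))            # standard error of the mean

print(mu, sigma, sem)  # 5.0 2.0 0.707...
```

Note that this is the population form of σ (dividing by n); for a sample-based estimate you would divide by n − 1, e.g. `data.std(ddof=1)` in NumPy.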
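The factor-of-four argument can also be checked numerically. The sketch below (an illustration of the principle, not Prism's actual implementation) simulates constant-CV Gaussian replicates at two responses that differ by a factor of two, then compares their average contributions to the sum-of-squares with and without 1/Y² weighting:

```python
import numpy as np

rng = np.random.default_rng(0)

true_low, true_high = 10.0, 20.0  # two responses differing by a factor of two
cv = 0.10                         # constant coefficient of variation
n = 100_000                       # many replicates so the averages are stable

low = rng.normal(true_low, cv * true_low, n)     # SD = 1.0
high = rng.normal(true_high, cv * true_high, n)  # SD = 2.0

# Unweighted: mean squared distance from the true curve
ss_low = np.mean((low - true_low) ** 2)
ss_high = np.mean((high - true_high) ** 2)
print(ss_high / ss_low)  # close to 4: the higher response dominates

# Weighted by 1/Y^2: contributions are equalized
w_low = np.mean(((low - true_low) / true_low) ** 2)
w_high = np.mean(((high - true_high) / true_high) ** 2)
print(w_high / w_low)  # close to 1
```

Dividing each residual by Y before squaring is exactly 1/Y² weighting, which is why it cancels the constant-CV scatter and lets every point contribute equally.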