Standard Deviation is a measurement of how far from the average (mean) the values in a data set typically lie.
Standard Deviation is a measure of variability, and it is on a different scale for each data set being measured; there is no “standard” standard deviation. It is possible to normalize it for comparison to other data sets using measures like R-squared and the Sharpe ratio.
The number arrived at when computing standard deviation is expressed in the same units as the variable being observed, and it marks a distance from the average in either a positive or negative direction; for a normally distributed data set, roughly 68% of the values fall within that distance of the mean.
The computation for standard deviation basically involves taking each value’s deviation from the mean, squaring those deviations, averaging them, and then taking the square root. Using squares keeps every deviation positive, so distance from the mean counts the same in either direction (much like taking an absolute value), and it also gives more weight to points that sit far from the mean. This is consistent with the normal distribution curve, or bell curve as it’s known, which plots how randomly varying data clusters around a central value.
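To make those steps concrete, here is a minimal Python sketch that computes the population standard deviation by hand; the sample values are made up purely for illustration.

```python
import math

# A small, made-up sample of values.
data = [2, 4, 4, 4, 5, 5, 7, 9]

# Step 1: average the data to get the mean.
mean = sum(data) / len(data)                      # 5.0

# Step 2: square each value's deviation from the mean.
squared_deviations = [(x - mean) ** 2 for x in data]

# Step 3: average the squared deviations -- this is the (population) variance.
variance = sum(squared_deviations) / len(data)    # 4.0

# Step 4: take the square root to get back to the original units.
std_dev = math.sqrt(variance)                     # 2.0

print(mean, variance, std_dev)
```

The same result comes from Python’s built-in statistics.pstdev(data), which computes the population standard deviation directly.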
Statistically, for a normally distributed data set, about 68% of the data will fall within one standard deviation (for an example we’ll say that’s a distance of 5 from the average), about 95% will fall within two standard deviations (10 from the average), and about 99.7% will fall within three standard deviations (15 units from the average).
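As a rough empirical check of those percentages, one can sample from a normal distribution and count how much of the sample lands within 1, 2, and 3 standard deviations of its mean; the seed, sample size, and distribution parameters below are arbitrary illustrative choices.

```python
import random
import statistics

# Sample 100,000 points from a normal distribution with mean 0 and
# standard deviation 5 (matching the example distance above).
random.seed(0)
sample = [random.gauss(0, 5) for _ in range(100_000)]

mean = statistics.fmean(sample)
sd = statistics.pstdev(sample)

# Count the fraction of the sample within 1, 2, and 3 standard deviations.
for k in (1, 2, 3):
    within = sum(abs(x - mean) <= k * sd for x in sample) / len(sample)
    print(f"within {k} standard deviation(s): {within:.3f}")
# Prints roughly 0.683, 0.954, and 0.997.
```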
Variance is a statistical measure that equals the Standard Deviation squared, and it also comes up repeatedly in statistical calculations.
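For instance, a quick sketch using Python’s statistics module (with the same made-up sample as before) confirms that relationship:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # same made-up sample as before

sd = statistics.pstdev(data)       # population standard deviation: 2.0
var = statistics.pvariance(data)   # population variance: 4.0

# The variance is the standard deviation squared.
print(math.isclose(sd ** 2, var))  # True
```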