This is my personal take on the subject. Certainly the beauty of the system is that it is flexible enough to accommodate a variety of methods, so each user can decide how best to use it for themselves.
With a biofeedback scan using the default octave range, we are scanning from, say, 76,000 Hz to 152,000 Hz. That range spans exactly 76,000 Hz; however, since we use a step size of 20 Hz, we actually scan only 3,800 discrete frequencies.
When we then ask for the top 20 hits, we are getting back the top 0.5263% (20 of 3,800) of those frequencies.
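The arithmetic above can be sketched in a few lines of Python (the start frequency, octave range, and step size are the default values described here, not values read from any scanning software):

```python
# Default octave scan: one octave spans start..2*start.
start_hz = 76_000
end_hz = 152_000           # one octave above the start
step_hz = 20

# Number of discrete frequencies actually scanned.
num_frequencies = (end_hz - start_hz) // step_hz

# Fraction of scanned frequencies returned as the top 20 hits.
top_hits = 20
top_percent = top_hits / num_frequencies * 100

print(num_frequencies)           # 3800
print(round(top_percent, 4))     # 0.5263
```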
Given this, the result set we get back is already highly optimized. I see no value in grading such a set, especially since the 3,800 frequencies cover a complete octave, which by resonance also includes frequencies outside that range -- essentially making it a complete scan.
When we grade a frequency program that has only, say, 30 frequencies, there are many gaps those 30 frequencies do not cover, so grading has value there.
So essentially, my answer is to take the result set from a full biofeedback scan and use it as is.
Grading has value when trying to sort through a limited set of frequencies.
Optimizing has value when you are looking to refine a set to yourself; however, I find less value in this unless the frequencies are below 4,000 Hz. Here is why.
A frequency has an effective range of resonance, which is considered to be 0.025% of the frequency. For a frequency like 76,000 Hz, this is ±19 Hz, so anything between 75,981 Hz and 76,019 Hz is targeted; there is no need for decimal resolution at all.
At 4,000 Hz, 0.025% = 1 Hz. When using smaller frequencies, decimal accuracy has value, so optimizing a list to two decimal places of accuracy carries weight.
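A small helper makes the bandwidth point concrete (the 0.025% tolerance figure is the one quoted above; the function name is just for illustration):

```python
def resonance_band(freq_hz, tolerance_pct=0.025):
    """Return the (low, high) window covered by a frequency's
    effective range of resonance, assumed to be +/- tolerance_pct
    percent of the frequency itself."""
    delta = freq_hz * tolerance_pct / 100
    return freq_hz - delta, freq_hz + delta

print(resonance_band(76_000))   # (75981.0, 76019.0) -> whole-Hz steps suffice
print(resonance_band(4_000))    # (3999.0, 4001.0)   -> only +/- 1 Hz of slack
```

At 76,000 Hz the window is 38 Hz wide, wider than the 20 Hz scan step, so neighboring scan steps overlap; at 4,000 Hz the window is only 2 Hz wide, which is why decimal precision starts to matter down there.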
In both cases (grading and optimizing), I see them as useful tools only when working with existing frequency programs. All results from regular biofeedback scans are already graded and optimized.
As to why you get different results on each scan, this is usually a case of pathogenic noise. Consider it this way: if you ask for the top 20 hits, and you have, say, a top 60 hits that rank nearly the same relative to the other 3,740 frequencies in the scan, those 60 hits will jostle for the top 20 positions on each scan. The act of scanning also targets those pathogens to some degree, so it is entirely possible that a hit that took position 19 drops to position 49 when you re-scan, even without applying the result set for any period of time.
Over time, as you eliminate the noise, the pathogens that are able to evade the immune system and persist will start to climb into the top 20 and remain there until resolved. At that point, you will find your scans take on a statistically more consistent character if you perform back-to-back scans without any application time to change the landscape of your pathogenic load.
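The jostling effect can be illustrated with a toy simulation (all numbers here are made up purely to show the mechanism: 60 hits with nearly identical scores sit well above 3,740 background frequencies, and per-scan measurement noise alone reshuffles which 20 land on top):

```python
import random

random.seed(1)

# Hypothetical scores: 60 near-equal strong hits, the rest far below.
near_equal = {f: 100 + random.random() for f in range(60)}
background = {f: 50 + random.random() for f in range(60, 3800)}
scores = {**near_equal, **background}

def scan(scores, noise=2.0):
    """One noisy 'scan': perturb every score by random noise,
    then return the set of the 20 highest-ranked frequencies."""
    noisy = {f: s + random.uniform(-noise, noise) for f, s in scores.items()}
    return set(sorted(noisy, key=noisy.get, reverse=True)[:20])

first, second = scan(scores), scan(scores)
# Both top-20 sets are drawn from the same 60 strong hits,
# but the two scans generally disagree on which 20 made the cut.
print(len(first & second))
```

The background frequencies never reach the top, yet the top 20 differ from scan to scan, which matches the behavior described above: consistency only emerges once the near-equal contenders are thinned out.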
For more details, please check the link: