The wording of the OP isn’t all that clear.

Usually when I see references to accuracy, there is a false negative rate (the percentage chance that something/someone that SHOULD be identified, IS NOT) and a false positive rate (the percentage chance that something/someone that SHOULD NOT be identified, IS).

So, suppose there is a genetic test for some disease, and the test has a 10% false positive rate and a 5% false negative rate.

John tests positive.

Sally tests negative.

What are the true chances that John really does have the disease, and that Sally does not?

Surprisingly, you can’t answer the question yet. You need to know the percentage of the population that actually has the condition.

If only one person out of 300 million ACTUALLY has the disease, but the test has a 10% false positive rate, the vast majority of those who test positive don't actually have it. (This is basically an exaggerated form of the problem in the OP.)
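To put numbers on that extreme case, here's a quick Python sketch (the 5% false negative rate is carried over from the earlier example; the variable names are mine):

```python
# One person in 300 million actually has the disease.
population = 300_000_000
true_cases = 1
fp_rate = 0.10  # 10% of healthy people wrongly test positive
fn_rate = 0.05  # 5% of sick people wrongly test negative

# Expected counts of each kind of positive result:
false_positives = fp_rate * (population - true_cases)  # ~30 million
true_positives = (1 - fn_rate) * true_cases            # ~0.95

# Of everyone who tests positive, what fraction is actually sick?
share_correct = true_positives / (true_positives + false_positives)
print(f"False positives: {false_positives:,.0f}")
print(f"Chance a positive result is real: {share_correct:.10f}")
```

Roughly 30 million healthy people test positive for every one real case, so a positive result here means almost nothing.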

OK, so I’m going to try to get the right formula here - if I make a mistake, someone please point it out:

Let fp be the false positive rate.

Let fn be the false negative rate.

Let tpp be the prevalence - the fraction of the tested population as a whole that actually has the condition.

Then, for any test that returns positive, the chance that it's a correct diagnosis is:

((1-fn) * tpp) / ((1-fn) * tpp + fp * (1-tpp))

(That's just Bayes' theorem: the expected true positives divided by all positives, true and false alike.)

When tpp (the true rate of positives in the tested population) is very low, then even a small-ish false positive rate can cause the testing to be wildly inaccurate.