For over 100 years, fingerprint evidence has served as a valuable tool for the criminal justice system. Relying on the generalized premise of “uniqueness,” the forensic community has regarded fingerprint evidence as nearly infallible, capable of “individualizing” the source of a fingerprint impression to a single person. While the uniqueness of a complete record of friction ridge skin detail is generally undisputed, the extension of that premise to partial and degraded impressions has become a central issue of debate. Nevertheless, forensic science laboratories routinely use the terms “individualization” and “identification” in technical reports and expert witness testimony to express an association between a partial impression and a specific known source. Over the last several years, criticism has grown within the scientific and legal communities regarding the use of such terms to express source associations that rely on expert interpretation. The crux of the criticism is that these terms imply to the fact-finder an absolute certainty and infallibility that has not been demonstrated by available scientific data. As a result, several authoritative scientific organizations have recommended that forensic science laboratories not report or testify, directly or by implication, to a source attribution to the exclusion of all others in the world, nor assert 100% infallibility or state conclusions in absolute terms when dealing with population issues. Consequently, the traditional paradigm of reporting latent fingerprint conclusions with an implication of absolute certainty to a single source has been challenged. The underlying basis for the challenge pertains to the logic applied during the interpretation of the evidence and the framework by which that evidence is articulated. 
By recognizing the subtle yet non-trivial differences in that logic, the fingerprint community may consider an alternative framework for reporting fingerprint evidence, one that ensures certainties are neither overstated nor understated.