Precision measures how many returned results are actually relevant. In store search, high precision means the first page is full of great options, not noise.
Precision is an information-retrieval metric:
precision = relevant_results_returned / total_results_returned
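In code, that ratio is one line over boolean relevance judgments; a minimal sketch, assuming each returned result has already been labeled relevant or not:

```python
def precision(returned: list[bool]) -> float:
    """Fraction of returned results judged relevant.

    `returned` holds one boolean per result: True if a judge
    (or a click/conversion proxy) marked it relevant.
    """
    if not returned:
        return 0.0
    return sum(returned) / len(returned)

# A query returning 10 results, 7 of them relevant -> precision 0.7
print(precision([True] * 7 + [False] * 3))  # 0.7
```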
Variants include Precision@k (e.g., P@10) and Average Precision; modern stacks also track NDCG for graded relevance.
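To make the variants concrete, here is a sketch of Precision@k and Average Precision over the same boolean labels (the function names are illustrative, not any particular library's API):

```python
def precision_at_k(returned: list[bool], k: int) -> float:
    """Precision computed over only the top-k results."""
    top_k = returned[:k]
    return sum(top_k) / k if k else 0.0

def average_precision(returned: list[bool]) -> float:
    """Mean of precision@i at each rank i where a relevant result appears.

    Rewards placing relevant results early; 1.0 means every relevant
    item is ranked ahead of every irrelevant one.
    """
    hits, total = 0, 0.0
    for i, rel in enumerate(returned, start=1):
        if rel:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

labels = [True, False, True, True, False]
print(precision_at_k(labels, 3))   # 2/3 ≈ 0.667
print(average_precision(labels))   # (1/1 + 2/3 + 3/4) / 3 ≈ 0.806
```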
Precision is about the quality of what you show. Combine phrase/exact matching with learning-to-rank (LTR) and guardrails, then validate with golden sets and A/B tests to keep the top-k clean and profitable.
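One common way to run that golden-set validation is a regression gate that fails when top-k precision drops below a floor; the queries, SKUs, threshold, and `search_fn` callable below are placeholder assumptions:

```python
# Hypothetical golden-set gate: fail the build if P@10 regresses.
GOLDEN_SET = {
    "running shoes": {"sku-101", "sku-104", "sku-109"},  # judged-relevant SKUs
    "wireless earbuds": {"sku-220", "sku-231"},
}
P_AT_10_FLOOR = 0.6  # assumed threshold; tune per category

def p_at_k(returned_ids: list[str], relevant: set[str], k: int = 10) -> float:
    """Precision over the top-k returned IDs (divides by what was shown)."""
    top = returned_ids[:k]
    return sum(r in relevant for r in top) / len(top) if top else 0.0

def gate(search_fn) -> bool:
    """Run every golden query through `search_fn` and check the floor."""
    return all(
        p_at_k(search_fn(q), relevant) >= P_AT_10_FLOOR
        for q, relevant in GOLDEN_SET.items()
    )
```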
Precision vs recall?
Precision = “of what we showed, how much was relevant?” Recall = “of what’s relevant, how much did we find?”
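Side by side, the two metrics differ only in the denominator, as this sketch shows:

```python
def precision_recall(returned: set[str], relevant: set[str]) -> tuple[float, float]:
    """Precision divides hits by what was shown; recall by what exists."""
    hits = len(returned & relevant)
    precision = hits / len(returned) if returned else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 10 results shown, 6 of them relevant; 20 relevant items exist in the catalog.
shown = {f"sku-{i}" for i in range(10)}
relevant = {f"sku-{i}" for i in range(4, 24)}
print(precision_recall(shown, relevant))  # (0.6, 0.3)
```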
Precision vs NDCG?
Precision treats relevance as binary; NDCG handles graded relevance and position, which often makes it the better choice for ranking evaluation.
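For reference, a sketch of NDCG with linear gain (some setups use 2^grade − 1 instead), showing both the position discount and the normalization that plain precision lacks:

```python
import math

def dcg(grades: list[int]) -> float:
    """Discounted cumulative gain: higher grades earlier count more."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg(grades: list[int]) -> float:
    """DCG normalized by the ideal (descending-grade) ordering."""
    ideal = dcg(sorted(grades, reverse=True))
    return dcg(grades) / ideal if ideal else 0.0

# Graded 0-3 relevance; the perfect item is ranked third, so NDCG < 1.
print(ndcg([2, 1, 3, 0]))  # ≈ 0.87
```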
How to raise precision without killing recall?
Use hybrid retrieval, re-rankers, and category-aware synonyms with caps.
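As an illustrative sketch of that combination, reciprocal rank fusion (RRF) merges a lexical and a vector result list, and a guardrail caps how many synonym-expanded results survive; every name here is an assumption, not a specific engine's API:

```python
from collections import defaultdict

def rrf_fuse(lexical: list[str], vector: list[str], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score = sum of 1 / (k + rank) across lists."""
    scores: dict[str, float] = defaultdict(float)
    for results in (lexical, vector):
        for rank, doc in enumerate(results, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def cap_synonym_hits(ranked: list[str], from_synonyms: set[str], cap: int = 2) -> list[str]:
    """Guardrail: allow at most `cap` synonym-expanded docs in the final list."""
    out, used = [], 0
    for doc in ranked:
        if doc in from_synonyms:
            if used >= cap:
                continue
            used += 1
        out.append(doc)
    return out

fused = rrf_fuse(["a", "b", "c"], ["b", "d", "a"])
print(cap_synonym_hits(fused, from_synonyms={"c", "d"}, cap=1))
```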