Vector quantization compresses full-fidelity vectors into representations with fewer bits, cutting memory and storage usage and improving query efficiency, though it may reduce recall. MongoDB Atlas Vector Search supports automatic quantization of float embeddings at index time, as well as ingestion of pre-quantized vectors produced by models such as Voyage AI. Quantization is recommended for applications with more than 100,000 vectors.
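As a rough illustration of how automatic quantization is enabled, the sketch below defines an Atlas Vector Search index with scalar quantization on a float embedding field. It assumes a recent PyMongo driver with `SearchIndexModel` support; the connection string, database, collection, field name (`embedding`), and dimension count are placeholders, not values from this document.

```python
# Minimal sketch: create a vectorSearch index with automatic scalar quantization.
# Assumes PyMongo >= 4.7 and an Atlas cluster with Vector Search enabled.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
collection = client["sample_db"]["documents"]                       # illustrative names

index_model = SearchIndexModel(
    name="vector_index",
    type="vectorSearch",
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",       # field holding the float embeddings
                "numDimensions": 1024,     # must match the embedding model's output
                "similarity": "dotProduct",
                "quantization": "scalar",  # automatic quantization; "binary" trades
                                           # more recall for a smaller footprint
            }
        ]
    },
)

collection.create_search_index(model=index_model)
```

Queries against the index are unchanged; Atlas quantizes the stored vectors and handles rescoring internally, so the application continues to supply full-fidelity query embeddings.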