Converter successfully built; treat it as an optional function for now, since it needs an extra ~1GB for the large torch library. If you want to convert a large safetensors model (e.g., flux1, sd3.5, etc.) to GGUF (16-bit), and then quantize it further into smaller GGUF file(s), run `pip install torch` to activate this function. It's a convenient tool: you just select a single safetensors file to convert, nothing else.
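
For reference, here is a minimal sketch of what a safetensors-to-GGUF (16-bit) conversion looks like in general, assuming the `gguf`, `safetensors`, and `torch` packages are installed; the file names and the `arch` tag are placeholders for illustration, not the converter's actual internals:

```python
# Illustrative sketch only: read every tensor from one safetensors file,
# cast it to 16-bit, and write it out as a GGUF file.
import torch
import gguf
from safetensors.torch import load_file

def convert_to_gguf_f16(src_path: str, dst_path: str, arch: str = "flux") -> None:
    tensors = load_file(src_path)              # load all tensors onto CPU
    writer = gguf.GGUFWriter(dst_path, arch)   # arch tag is a placeholder assumption
    for name, tensor in tensors.items():
        data = tensor.to(torch.float16).numpy()  # cast down to 16-bit
        writer.add_tensor(name, data)
    writer.write_header_to_file()
    writer.write_kv_data_to_file()
    writer.write_tensors_to_file()
    writer.close()

convert_to_gguf_f16("flux1-dev.safetensors", "flux1-dev-f16.gguf")
```

Further quantization (e.g., to smaller GGUF quant types) would then be applied to the resulting 16-bit file as a separate step.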