ggml-org/gguf-my-repo: the script fails with Flux FP8

In ggml-org/gguf-my-repo, the script does not accept my custom FP8 FLUX.1 Dev models (it complains, "I can't convert to FP16"). So I have to make a custom 16-bit version, which is too big (28–33 GB), and when I upload the model, the upload fails with an "error" message. I think the solution would be for HF to modify the Python scripts to accept FP8 models. I also don't know what the limit is for uploading a single file, or the total allowed in a free repo. The gguf-my-repo Space only works with 16-bit, and the models are too heavy to upload.

Try this.

Thank you for stepping in and suggesting another method of converting to GGUF. I tried it with my Flux model and also with the original Black Forest FLUX.1-dev, and pressing Submit does not trigger the conversion in either case. I tried black-forest-labs/FLUX.1-dev/flux1-dev.safetensors as instructed and I can't generate a GGUF Q4_1. I must be making a mistake and I don't know what it is. Thank you for your time and attention.

cbrescia

No, maybe it's not your fault; the GGUF-related tooling is buggy in places, partly because GGUF comes from a separate project (ggml/llama.cpp) rather than HF, and HF's support lags behind.
Also, the official BFL FLUX.1-dev is a gated repo, which you can't read without authentication.
If your file was made with ComfyUI, it may throw an error because the format has actually changed a bit.
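One concrete way the ComfyUI format differs, as far as I know: full checkpoints saved from ComfyUI commonly nest the diffusion weights under a `model.diffusion_model.` key prefix, whereas converters written against the BFL layout expect bare keys. A hypothetical pre-processing step (the prefix and key names here are assumptions, not taken from any HF script) would just strip that prefix:

```python
def strip_comfy_prefix(state_dict: dict) -> dict:
    """Rename keys like 'model.diffusion_model.img_in.weight' to
    'img_in.weight' so they match a BFL-style layout.

    Hypothetical helper: the exact prefix and expected layout may
    differ between tools and checkpoint versions.
    """
    prefix = "model.diffusion_model."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Toy example with placeholder values instead of real tensors:
sd = {"model.diffusion_model.img_in.weight": 1, "txt_in.weight": 2}
print(strip_comfy_prefix(sd))
```

If a converter rejects a ComfyUI file, inspecting the checkpoint's key names is usually the quickest way to confirm a mismatch like this.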

All in all, these are problems the HF engineers should address. I know it's a lot of work for them, but still.

If your PC is powerful enough, maybe you should try this? (Sorry, it's in Japanese.)