Replies: 3 comments 2 replies
- It is odd that the conversion doesn't take place. Is the result the same with the q8_0 output type?
- Since you seem to have enough RAM for even an F32 copy of the model in memory, you could try disabling lazy conversion (`--no-lazy`). By default, with lazy conversion, the maximum RAM usage should be around the size of the largest tensor when converted to F32. In your case that would be about 2 GiB of RAM (because the biggest tensor has around 500M parameters), which is strange, since that should definitely fit in your system. I'm not sure what's happening either, but I would like to understand whether it's caused by the lazy conversion or by something else.
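As a rough sanity check (my own sketch, not output from the converter), the ~2 GiB figure follows directly from the largest tensor in this model, the token embedding whose shape appears in the conversion log below:

```python
# Back-of-the-envelope peak-RAM estimate for lazy conversion: roughly one
# copy of the largest tensor materialized as F32. The shape is taken from
# the token_embd.weight line in the log below.
largest_tensor_params = 3584 * 152064   # ~545M parameters
bytes_per_f32 = 4
peak_gib = largest_tensor_params * bytes_per_f32 / 2**30
print(f"expected peak RAM for the largest tensor: ~{peak_gib:.1f} GiB")  # ~2.0 GiB
```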
Alternatively, I don't know if
- Thank you all! I found out the reason for the "process killed": it was a lack of HDD space. My system disk is only 30 GB and it was full. It seems llama.cpp created temporary files on the system disk while converting the safetensors to GGUF. I need to reconfigure my disk space. Again, thank you all!
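For anyone who hits the same "Killed" during the write phase, a simple pre-flight check along these lines would catch a full disk before the conversion starts. This is just a sketch of mine, not part of llama.cpp; the output path is taken from the command below, and the ~16 GiB requirement is an assumption based on the 15.2G the converter reports:

```python
# Hypothetical pre-flight check, not part of llama.cpp: make sure the output
# directory and the system temp directory have enough free space before
# converting (the F16 GGUF alone is ~15.2 GB for this model).
import shutil
import tempfile

NEEDED_GIB = 16  # assumed headroom for the ~15.2 GB F16 output

for label, path in [
    ("output dir", "/root/autodl-tmp/converted_model"),  # path from the command below
    ("temp dir", tempfile.gettempdir()),
]:
    free_gib = shutil.disk_usage(path).free / 2**30
    verdict = "OK" if free_gib >= NEEDED_GIB else "NOT ENOUGH"
    print(f"{label}: {free_gib:.1f} GiB free -> {verdict}")
```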
- I had problems when I tried to convert the DeepSeek-R1-Distill-Qwen-7B safetensors files to GGUF: the conversion progress stopped at 0% and then the process was interrupted. I checked the output path; there is a GGUF file there, but its size is only 5.7 MB.
Here is what I have done.
I used this command:
python3 convert_hf_to_gguf.py "/root/autodl-tmp/model/DeepSeek-R1-Distill-Qwen-7B/" --outtype f16 --outfile "/root/autodl-tmp/converted_model/DeepSeek-R1-Distill-Qwen-7B/f16.gguf" --verbose 2>&1 | tee -a conversion.log
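For context, the disk space the F16 output needs can be roughly estimated as the parameter count times two bytes. A minimal sketch of mine, assuming roughly 7.6B parameters for this model:

```python
# Rough estimate of the F16 GGUF size (and thus the free disk space needed),
# assuming ~7.6B parameters; this is my own sketch, not converter output.
params = 7.6e9
bytes_per_f16 = 2
print(f"expected F16 GGUF size: ~{params * bytes_per_f16 / 1e9:.1f} GB")  # ~15.2 GB, matching the log below
```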
Below is the log information:
INFO:hf-to-gguf:Loading model: DeepSeek-R1-Distill-Qwen-7B
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-000002.safetensors'
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.0.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.0.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.1.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.1.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.1.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.1.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.1.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.1.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.1.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.1.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.1.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.1.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.1.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.1.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.10.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.10.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.10.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.10.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.10.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.10.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.10.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.10.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.10.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.10.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.10.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.10.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.11.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.11.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.11.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.11.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.11.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.11.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.11.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.11.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.11.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.11.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.11.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.11.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.12.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.12.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.12.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.12.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.12.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.12.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.12.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.12.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.12.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.12.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.12.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.12.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.13.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.13.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.13.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.13.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.13.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.13.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.13.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.13.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.13.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.13.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.13.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.13.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.14.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.14.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.14.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.14.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.14.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.14.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.14.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.14.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.14.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.14.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.14.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.14.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.15.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.15.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.15.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.15.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.15.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.15.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.15.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.15.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.15.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.15.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.15.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.15.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.16.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.16.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.16.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.16.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.16.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.16.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.16.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.2.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.2.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.2.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.2.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.2.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.2.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.2.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.2.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.2.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.2.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.2.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.2.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.3.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.3.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.3.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.3.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.3.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.3.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.3.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.3.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.3.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.3.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.3.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.3.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.4.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.4.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.4.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.4.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.4.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.4.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.4.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.4.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.4.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.4.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.4.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.4.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.5.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.5.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.5.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.5.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.5.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.5.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.5.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.5.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.5.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.5.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.5.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.5.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.6.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.6.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.6.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.6.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.6.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.6.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.6.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.6.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.6.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.6.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.6.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.6.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.7.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.7.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.7.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.7.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.7.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.7.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.7.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.7.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.7.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.7.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.7.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.7.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.8.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.8.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.8.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.8.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.8.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.8.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.8.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.8.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.8.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.8.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.8.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.8.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.9.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.9.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.9.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.9.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.9.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.9.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.9.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.9.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.9.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.9.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.9.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.9.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:gguf: loading model part 'model-00002-of-000002.safetensors'
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:blk.16.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.16.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.16.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.16.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.16.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.17.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.17.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.17.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.17.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.17.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.17.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.17.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.17.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.17.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.17.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.17.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.17.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.18.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.18.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.18.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.18.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.18.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.18.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.18.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.18.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.18.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.18.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.18.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.18.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.19.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.19.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.19.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.19.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.19.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.19.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.19.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.19.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.19.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.19.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.19.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.19.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.20.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.20.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.20.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.20.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.20.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.20.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.20.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.20.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.20.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.20.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.20.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.20.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.21.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.21.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.21.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.21.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.21.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.21.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.21.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.21.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.21.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.21.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.21.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.21.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.22.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.22.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.22.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.22.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.22.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.22.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.22.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.22.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.22.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.22.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.22.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.22.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.23.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.23.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.23.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.23.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.23.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.23.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.23.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.23.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.23.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.23.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.23.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.23.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.24.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.24.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.24.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.24.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.24.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.24.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.24.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.24.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.24.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.24.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.24.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.24.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.25.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.25.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.25.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.25.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.25.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.25.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.25.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.25.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.25.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.25.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.25.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.25.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.26.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.26.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.26.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.26.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.26.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.26.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.26.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.26.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.26.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.26.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.26.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.26.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.27.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.27.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
INFO:hf-to-gguf:blk.27.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.27.ffn_up.weight, torch.bfloat16 --> F16, shape = {3584, 18944}
INFO:hf-to-gguf:blk.27.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.27.attn_k.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.27.attn_k.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:blk.27.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.27.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.27.attn_q.weight, torch.bfloat16 --> F16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.27.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.27.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 131072
INFO:hf-to-gguf:gguf: embedding length = 3584
INFO:hf-to-gguf:gguf: feed forward length = 18944
INFO:hf-to-gguf:gguf: head count = 28
INFO:hf-to-gguf:gguf: key-value head count = 4
INFO:hf-to-gguf:gguf: rope theta = 10000
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
DEBUG:hf-to-gguf:chktok: [151646, 198, 4710, 14731, 65497, 7847, 1572, 2303, 78672, 10947, 145836, 320, 8252, 8, 26525, 114, 378, 235, 149921, 30543, 320, 35673, 99066, 97534, 8, 25521, 227, 11162, 99, 247, 149955, 220, 18, 220, 18, 18, 220, 18, 18, 18, 220, 18, 18, 18, 18, 220, 18, 18, 18, 18, 18, 220, 18, 18, 18, 18, 18, 18, 220, 18, 18, 18, 18, 18, 18, 18, 220, 18, 18, 18, 18, 18, 18, 18, 18, 220, 18, 13, 18, 220, 18, 496, 18, 220, 18, 1112, 18, 220, 146394, 97529, 241, 44258, 233, 146568, 44258, 224, 147603, 20879, 115, 146280, 44258, 223, 146280, 147272, 97529, 227, 144534, 937, 104100, 18493, 22377, 99257, 16, 18, 16, 19, 16, 20, 16, 35727, 21216, 55460, 53237, 18658, 14144, 1456, 13073, 63471, 33594, 3038, 133178, 79012, 3355, 4605, 4605, 13874, 13874, 73594, 3014, 3014, 28149, 17085, 2928, 26610, 7646, 358, 3003, 1012, 364, 83, 813, 566, 594, 1052, 11, 364, 787, 498, 2704, 30, 364, 44, 537, 2704, 358, 3278, 1281, 432, 11, 364, 35, 498, 1075, 1045, 15243, 30, 1205, 6, 42612, 264, 63866, 43]
DEBUG:hf-to-gguf:chkhsh: b3f499bb4255f8ca19fccd664443283318f2fd2414d5e0b040fbdd0cc195d6c5
DEBUG:hf-to-gguf:tokenizer.ggml.pre: 'deepseek-r1-qwen'
DEBUG:hf-to-gguf:chkhsh: b3f499bb4255f8ca19fccd664443283318f2fd2414d5e0b040fbdd0cc195d6c5
INFO:gguf.vocab:Adding 151387 merge(s).
INFO:gguf.vocab:Setting special token type bos to 151646
INFO:gguf.vocab:Setting special token type eos to 151643
INFO:gguf.vocab:Setting special token type pad to 151643
INFO:gguf.vocab:Setting add_bos_token to True
INFO:gguf.vocab:Setting add_eos_token to False
INFO:gguf.vocab:Setting chat_template to {% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>\n'}}{% endif %}
INFO:hf-to-gguf:Set model quantization version
INFO:gguf.gguf_writer:Writing the following files:
INFO:gguf.gguf_writer:/root/autodl-tmp/converted_model/DeepSeek-R1-Distill-Qwen-7B/f16.gguf: n_tensors = 339, total_size = 15.2G
Writing: 0%| | 0.00/15.2G [00:00<?, ?byte/s]
Killed
I would really appreciate it if anyone could help me figure out what happened.