
When quantizing gemma2 in W8A8 format, the input is not positive-definite and gemma2-27B cannot be quantized. #1152

Open
HelloCard opened this issue Feb 14, 2025 · 4 comments
Labels: bug (Something isn't working)

HelloCard commented Feb 14, 2025

Describe the bug
I have tried versions v0.1.0 through v0.4.0 and the behavior is the same: a very large error value followed by the message "input is not positive-definite".
The model is byroneverson/gemma-2-27b-it-abliterated, so the problem should be fairly easy to reproduce.
The error message, the running environment, and the script I used are below.

Environment

root@autodl-container-98fc43be4e-6d3c1325:~/autodl-tmp# python collect_env.py 
INFO 02-14 17:47:48 __init__.py:183] Automatically detected platform cuda.
WARNING 02-14 17:47:48 cuda.py:32] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May  6 2024, 19:46:43) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A800 80GB PCIe
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 57 bits virtual
Byte Order:                      Little Endian
CPU(s):                          112
On-line CPU(s) list:             0-111
Vendor ID:                       GenuineIntel
Model name:                      Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
CPU family:                      6
Model:                           106
Thread(s) per core:              2
Core(s) per socket:              28
Socket(s):                       2
Stepping:                        6
Frequency boost:                 enabled
CPU max MHz:                     2601.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5200.00
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       2.6 MiB (56 instances)
L1i cache:                       1.8 MiB (56 instances)
L2 cache:                        70 MiB (56 instances)
L3 cache:                        84 MiB (2 instances)
NUMA node(s):                    2
NUMA node0 CPU(s):               0-27,56-83
NUMA node1 CPU(s):               28-55,84-111
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pynvml==11.5.3
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1+cu124
[pip3] transformers==4.47.1
[pip3] triton==3.1.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-cublas-cu12        12.4.5.8                 pypi_0    pypi
[conda] nvidia-cuda-cupti-cu12    12.4.127                 pypi_0    pypi
[conda] nvidia-cuda-nvrtc-cu12    12.4.127                 pypi_0    pypi
[conda] nvidia-cuda-runtime-cu12  12.4.127                 pypi_0    pypi
[conda] nvidia-cudnn-cu12         9.1.0.70                 pypi_0    pypi
[conda] nvidia-cufft-cu12         11.2.1.3                 pypi_0    pypi
[conda] nvidia-curand-cu12        10.3.5.147               pypi_0    pypi
[conda] nvidia-cusolver-cu12      11.6.1.9                 pypi_0    pypi
[conda] nvidia-cusparse-cu12      12.3.1.170               pypi_0    pypi
[conda] nvidia-ml-py              12.570.86                pypi_0    pypi
[conda] nvidia-nccl-cu12          2.21.5                   pypi_0    pypi
[conda] nvidia-nvjitlink-cu12     12.4.127                 pypi_0    pypi
[conda] nvidia-nvtx-cu12          12.4.127                 pypi_0    pypi
[conda] pynvml                    11.5.3                   pypi_0    pypi
[conda] pyzmq                     26.2.0                   pypi_0    pypi
[conda] torch                     2.5.1+cu124              pypi_0    pypi
[conda] torchaudio                2.5.1                    pypi_0    pypi
[conda] torchvision               0.20.1+cu124             pypi_0    pypi
[conda] transformers              4.47.1                   pypi_0    pypi
[conda] triton                    3.1.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    NIC0    NIC1    NIC2    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     PXB     SYS     0-27,56-83      0               N/A
NIC0    SYS      X      SYS     SYS
NIC1    PXB     SYS      X      SYS
NIC2    SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_bond_0

NVIDIA_VISIBLE_DEVICES=GPU-9ad98301-8d39-acb0-5321-de936f348742
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.21.5-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,video
NVIDIA_PRODUCT_NAME=CUDA
CUDA_VERSION=12.4.1
LD_LIBRARY_PATH=/root/miniconda3/lib/python3.12/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
OMP_NUM_THREADS=14
MKL_NUM_THREADS=14
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

To Reproduce

from transformers import AutoTokenizer, AutoModelForCausalLM
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map

MODEL_ID = "/root/autodl-tmp/gemma-2-27b-it-abliterated"
device_map = calculate_offload_device_map(
    MODEL_ID, num_gpus=1, reserve_for_hessians=True, torch_dtype="auto")
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map=device_map, torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)



from datasets import load_dataset

NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096

# Load and preprocess the dataset
ds = load_dataset("/root/autodl-tmp/ultrachat_2k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}
ds = ds.map(preprocess)

def tokenize(sample):
    return tokenizer(sample["text"], padding=False, max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False)
ds = ds.map(tokenize, remove_columns=ds.column_names)



from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

# Configure the quantization algorithms

recipe = [
    SmoothQuantModifier(smoothing_strength=0.80),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"], sequential_update=True),
]


# Apply quantization
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Save the compressed model
SAVE_DIR = MODEL_ID.split("/")[-1] + "-W8A8-Dynamic-Per-Token"  # last path component, so the save dir is named after the model
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)

Errors

root@autodl-container-98fc43be4e-6d3c1325:~/autodl-tmp# python quant.py 
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:11<00:00,  1.06it/s]
2025-02-14T17:30:00.614862+0800 | main | WARNING - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
2025-02-14T17:30:00.615331+0800 | main | INFO - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=False,
bf16_full_eval=False,
clear_sparse_session=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_oneshot=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_steps=None,
eval_strategy=IntervalStrategy.NO,
eval_use_gather_object=False,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=./output/runs/Feb14_17-30-00_autodl-container-98fc43be4e-6d3c1325,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=3.0,
oneshot_device=cuda:0,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=./output,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
recipe=[SmoothQuantModifier(index=None, group=None, start=None, end=None, update=None, initialized_structure_=False, initialized_=False, finalized_=False, started_=False, ended_=False, smoothing_strength=0.8, mappings=[[['re:.*q_proj', 're:.*k_proj', 're:.*v_proj'], 're:.*input_layernorm'], [['re:.*gate_proj', 're:.*up_proj'], 're:.*post_attention_layernorm']], ignore=None, num_calibration_steps=None, calibration_function=None, hooks_=None, resolved_mappings_=None, scales_=None), GPTQModifier(index=None, group=None, start=None, end=None, update=None, initialized_structure_=False, initialized_=False, finalized_=False, started_=False, ended_=False, sequential_update=True, targets='Linear', sequential_targets=None, block_size=128, quantize=True, dampening_frac=0.01, config_groups=None, ignore=['lm_head'], disable_quantization_observer_epoch=None, num_calibration_steps=None, scheme='W8A8', model=None, layer_compressors_=None, compressible_layers_=None, quantization_modifier_=None)],
recipe_args=None,
remove_unused_columns=True,
report_to=['tensorboard'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=./output,
run_stages=False,
save_compressed=True,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=500,
save_strategy=SaveStrategy.STEPS,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
2025-02-14T17:30:01.384373+0800 | _check_create_state | INFO - State created for compression lifecycle
2025-02-14T17:30:01.385587+0800 | pre_initialize_structure | INFO - Compression lifecycle structure pre-initialized for 0 modifiers
2025-02-14T17:30:01.386339+0800 | pre_initialize_structure | INFO - Compression lifecycle structure pre-initialized for 0 modifiers
/root/miniconda3/lib/python3.12/site-packages/llmcompressor/transformers/finetune/session_mixin.py:88: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `Trainer.__init__`. Use `processing_class` instead.
  super().__init__(**kwargs)
Detected kernel version 4.19.90, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
2025-02-14T17:30:01.483076+0800 | one_shot | INFO - *** One Shot ***
2025-02-14T17:30:01.486758+0800 | from_modifiers | INFO - Creating recipe from modifiers
2025-02-14T17:30:01.514360+0800 | _check_compile_recipe | INFO - Recipe compiled and 1 modifiers created
2025-02-14T17:30:03.558805+0800 | _calibrate | INFO - Running SmoothQuantModifier calibration with 512 samples...
  0%|                                                                                                                                                                                                 | 0/512 [00:00<?, ?it/s]The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.
The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 512/512 [04:42<00:00,  1.81it/s]
2025-02-14T17:34:46.214148+0800 | _apply_smoothing | INFO - Smoothing activation scales...
2025-02-14T17:34:46.440419+0800 | on_initialize_structure | WARNING - GPTQ quantization is set to True without an active quantization modifier.
2025-02-14T17:34:46.440579+0800 | _build_quant_modifier | INFO - Building quantization modifier with args: {'targets': 'Linear', 'scheme': 'W8A8', 'ignore': ['lm_head']}
2025-02-14T17:34:46.479712+0800 | _check_calibration_data | INFO - Skipping QuantizationModifier calibration, it is not required for the provided quantization config.
2025-02-14T17:34:46.638168+0800 | initialize_compression | INFO - Preparing model.layers.0 for compression
2025-02-14T17:34:46.638286+0800 | initialize_compression | INFO - Preparing model.layers.1 for compression
2025-02-14T17:34:46.638330+0800 | initialize_compression | INFO - Preparing model.layers.2 for compression
2025-02-14T17:34:46.638364+0800 | initialize_compression | INFO - Preparing model.layers.3 for compression
2025-02-14T17:34:46.638394+0800 | initialize_compression | INFO - Preparing model.layers.4 for compression
2025-02-14T17:34:46.638422+0800 | initialize_compression | INFO - Preparing model.layers.5 for compression
2025-02-14T17:34:46.638456+0800 | initialize_compression | INFO - Preparing model.layers.6 for compression
2025-02-14T17:34:46.638485+0800 | initialize_compression | INFO - Preparing model.layers.7 for compression
2025-02-14T17:34:46.638513+0800 | initialize_compression | INFO - Preparing model.layers.8 for compression
2025-02-14T17:34:46.638540+0800 | initialize_compression | INFO - Preparing model.layers.9 for compression
2025-02-14T17:34:46.638567+0800 | initialize_compression | INFO - Preparing model.layers.10 for compression
2025-02-14T17:34:46.638593+0800 | initialize_compression | INFO - Preparing model.layers.11 for compression
2025-02-14T17:34:46.638620+0800 | initialize_compression | INFO - Preparing model.layers.12 for compression
2025-02-14T17:34:46.638648+0800 | initialize_compression | INFO - Preparing model.layers.13 for compression
2025-02-14T17:34:46.638677+0800 | initialize_compression | INFO - Preparing model.layers.14 for compression
2025-02-14T17:34:46.638704+0800 | initialize_compression | INFO - Preparing model.layers.15 for compression
2025-02-14T17:34:46.638729+0800 | initialize_compression | INFO - Preparing model.layers.16 for compression
2025-02-14T17:34:46.638755+0800 | initialize_compression | INFO - Preparing model.layers.17 for compression
2025-02-14T17:34:46.638781+0800 | initialize_compression | INFO - Preparing model.layers.18 for compression
2025-02-14T17:34:46.638806+0800 | initialize_compression | INFO - Preparing model.layers.19 for compression
2025-02-14T17:34:46.638830+0800 | initialize_compression | INFO - Preparing model.layers.20 for compression
2025-02-14T17:34:46.638855+0800 | initialize_compression | INFO - Preparing model.layers.21 for compression
2025-02-14T17:34:46.638879+0800 | initialize_compression | INFO - Preparing model.layers.22 for compression
2025-02-14T17:34:46.638906+0800 | initialize_compression | INFO - Preparing model.layers.23 for compression
2025-02-14T17:34:46.638933+0800 | initialize_compression | INFO - Preparing model.layers.24 for compression
2025-02-14T17:34:46.638960+0800 | initialize_compression | INFO - Preparing model.layers.25 for compression
2025-02-14T17:34:46.638986+0800 | initialize_compression | INFO - Preparing model.layers.26 for compression
2025-02-14T17:34:46.639013+0800 | initialize_compression | INFO - Preparing model.layers.27 for compression
2025-02-14T17:34:46.639039+0800 | initialize_compression | INFO - Preparing model.layers.28 for compression
2025-02-14T17:34:46.639064+0800 | initialize_compression | INFO - Preparing model.layers.29 for compression
2025-02-14T17:34:46.639093+0800 | initialize_compression | INFO - Preparing model.layers.30 for compression
2025-02-14T17:34:46.639120+0800 | initialize_compression | INFO - Preparing model.layers.31 for compression
2025-02-14T17:34:46.639151+0800 | initialize_compression | INFO - Preparing model.layers.32 for compression
2025-02-14T17:34:46.639178+0800 | initialize_compression | INFO - Preparing model.layers.33 for compression
2025-02-14T17:34:46.639203+0800 | initialize_compression | INFO - Preparing model.layers.34 for compression
2025-02-14T17:34:46.639229+0800 | initialize_compression | INFO - Preparing model.layers.35 for compression
2025-02-14T17:34:46.639255+0800 | initialize_compression | INFO - Preparing model.layers.36 for compression
2025-02-14T17:34:46.639281+0800 | initialize_compression | INFO - Preparing model.layers.37 for compression
2025-02-14T17:34:46.639307+0800 | initialize_compression | INFO - Preparing model.layers.38 for compression
2025-02-14T17:34:46.639333+0800 | initialize_compression | INFO - Preparing model.layers.39 for compression
2025-02-14T17:34:46.639360+0800 | initialize_compression | INFO - Preparing model.layers.40 for compression
2025-02-14T17:34:46.639386+0800 | initialize_compression | INFO - Preparing model.layers.41 for compression
2025-02-14T17:34:46.639414+0800 | initialize_compression | INFO - Preparing model.layers.42 for compression
2025-02-14T17:34:46.639445+0800 | initialize_compression | INFO - Preparing model.layers.43 for compression
2025-02-14T17:34:46.639474+0800 | initialize_compression | INFO - Preparing model.layers.44 for compression
2025-02-14T17:34:46.639502+0800 | initialize_compression | INFO - Preparing model.layers.45 for compression
2025-02-14T17:34:46.639601+0800 | apply_compression | INFO - Running GPTQModifier calibration with 512 samples...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 512/512 [00:04<00:00, 126.66it/s]
2025-02-14T17:34:50.686715+0800 | apply_compression | INFO - 
===== Compressing layer 1/46  =====
2025-02-14T17:34:50.691135+0800 | apply_compression | INFO - Calibrating model.layers.0...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 512/512 [02:07<00:00,  4.01it/s]
2025-02-14T17:36:58.420643+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.self_attn.q_proj...
2025-02-14T17:36:59.991431+0800 | compress | METRIC - time 1.57
2025-02-14T17:36:59.991746+0800 | compress | METRIC - error 6880958087168.00
2025-02-14T17:36:59.992172+0800 | compress | METRIC - GPU 0 | usage: 74.76% | total memory: 80 GB
2025-02-14T17:36:59.992274+0800 | compress | METRIC - Compressed layer size: 36.01171875 MB
done
2025-02-14T17:36:59.992501+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.self_attn.k_proj...
2025-02-14T17:37:01.484486+0800 | compress | METRIC - time 1.49
2025-02-14T17:37:01.485302+0800 | compress | METRIC - error 135349264384.00
2025-02-14T17:37:01.485462+0800 | compress | METRIC - GPU 0 | usage: 74.77% | total memory: 80 GB
2025-02-14T17:37:01.485552+0800 | compress | METRIC - Compressed layer size: 18.005859375 MB
done
2025-02-14T17:37:01.485758+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.self_attn.v_proj...
2025-02-14T17:37:02.978516+0800 | compress | METRIC - time 1.49
2025-02-14T17:37:02.979331+0800 | compress | METRIC - error 39995977728.00
2025-02-14T17:37:02.979491+0800 | compress | METRIC - GPU 0 | usage: 74.77% | total memory: 80 GB
2025-02-14T17:37:02.979568+0800 | compress | METRIC - Compressed layer size: 18.005859375 MB
done
2025-02-14T17:37:02.979773+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.self_attn.o_proj...
2025-02-14T17:37:04.356546+0800 | compress | METRIC - time 1.38
2025-02-14T17:37:04.357346+0800 | compress | METRIC - error 142.44
2025-02-14T17:37:04.357511+0800 | compress | METRIC - GPU 0 | usage: 74.77% | total memory: 80 GB
2025-02-14T17:37:04.357593+0800 | compress | METRIC - Compressed layer size: 36.01318359375 MB
done
2025-02-14T17:37:04.357817+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.mlp.gate_proj...
2025-02-14T17:37:06.438215+0800 | compress | METRIC - time 2.08
2025-02-14T17:37:06.439094+0800 | compress | METRIC - error 24191828.00
2025-02-14T17:37:06.439260+0800 | compress | METRIC - GPU 0 | usage: 77.47% | total memory: 80 GB
2025-02-14T17:37:06.439339+0800 | compress | METRIC - Compressed layer size: 324.10546875 MB
done
2025-02-14T17:37:06.439550+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.mlp.up_proj...
2025-02-14T17:37:08.506756+0800 | compress | METRIC - time 2.07
2025-02-14T17:37:08.507528+0800 | compress | METRIC - error 22884912.00
2025-02-14T17:37:08.507678+0800 | compress | METRIC - GPU 0 | usage: 77.47% | total memory: 80 GB
2025-02-14T17:37:08.507755+0800 | compress | METRIC - Compressed layer size: 324.10546875 MB
done
2025-02-14T17:37:08.507954+0800 | compress_module | INFO - Compressing model.layers.0.model.layers.0.mlp.down_proj...
Traceback (most recent call last):
  File "/root/autodl-tmp/quant.py", line 44, in <module>
    oneshot(
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/transformers/finetune/text_generation.py", line 76, in oneshot
    main(model_args, data_args, training_args)
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/transformers/finetune/text_generation.py", line 364, in main
    stage_runner.one_shot()
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/transformers/finetune/runner.py", line 171, in one_shot
    self.trainer.one_shot(calibration_data=calib_data, stage=stage)
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/transformers/finetune/session_mixin.py", line 401, in one_shot
    apply(
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/core/session_functions.py", line 184, in apply
    return active_session().apply(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/core/session.py", line 210, in apply
    self.initialize(**kwargs)
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/core/session.py", line 156, in initialize
    mod_data = self._lifecycle.initialize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/core/lifecycle.py", line 126, in initialize
    data = mod.initialize(state=self.state, **extras)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/stage.py", line 124, in initialize
    modifier.initialize(state, **kwargs)
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/modifier.py", line 118, in initialize
    initialized = self.on_initialize(state=state, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/quantization/gptq/base.py", line 187, in on_initialize
    self.apply_compression(calibration_dataloader)
  File "/root/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/quantization/gptq/base.py", line 292, in apply_compression
    layer_compressor.compress()
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/utils/layer_compressor.py", line 176, in compress
    self.layer.apply(compress_module)
  File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1029, in apply
    module.apply(fn)
  File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1029, in apply
    module.apply(fn)
  File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1030, in apply
    fn(self)
  File "/root/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/utils/layer_compressor.py", line 172, in compress_module
    module.compress(**self.args)
  File "/root/miniconda3/lib/python3.12/site-packages/llmcompressor/modifiers/quantization/gptq/utils/gptq_wrapper.py", line 166, in compress
    self.H = torch.linalg.cholesky(self.H)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 33013 is not positive-definite).
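
For context on where this fails: the Cholesky call above factors the accumulated Hessian H, and GPTQ-style implementations typically regularize H with a diagonal damping term (controlled by dampening_frac) before factorization. Below is a minimal illustrative sketch of that idea, modeled on reference GPTQ implementations rather than the exact llm-compressor gptq_wrapper code:

```python
# Sketch (illustrative, not the exact llm-compressor code): dampening adds a
# fraction of the mean Hessian diagonal to H before factorization, so a larger
# dampening_frac makes torch.linalg.cholesky more likely to succeed on a
# near-singular Hessian.
import torch

def dampen_hessian(H: torch.Tensor, dampening_frac: float = 0.01) -> torch.Tensor:
    damp = dampening_frac * torch.mean(torch.diag(H))
    idx = torch.arange(H.shape[0], device=H.device)
    H = H.clone()
    H[idx, idx] += damp  # regularize the diagonal
    return H
```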
HelloCard added the bug label on Feb 14, 2025
kylesayrs (Collaborator) commented
Hi @HelloCard!

This issue is due to inherent numerical instability in the GPTQ algorithm. Listed below are a few courses of action that can help randomize the data and avoid the instability. Updating to the latest release or building from source may also fix the issue, due to very slight differences in the algorithm implementation.

"Failed to invert hessian due to numerical instability. Consider "

Please let me know if none of these solutions work for you.
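
For reference, a minimal sketch of the most common mitigation in code, assuming the same GPTQModifier API used in the reproduction script above; the specific dampening_frac value is illustrative, not a verified fix:

```python
# Sketch: raise dampening_frac above its 0.01 default (visible in the recipe
# dump in the log above) so the accumulated Hessian is more strongly
# regularized before inversion. Values are illustrative.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

recipe = [
    SmoothQuantModifier(smoothing_strength=0.80),
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        ignore=["lm_head"],
        sequential_update=True,
        dampening_frac=0.1,  # default is 0.01
    ),
]
```

Re-shuffling the calibration set with a different seed (the `ds.shuffle(seed=...)` call in the script above) changes the accumulated Hessian and can also sidestep the failure.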

kylesayrs self-assigned this on Feb 18, 2025
HelloCard (Author) commented
@kylesayrs
Pulled the latest source code and installed it: no effect.
Used a 4096-sample calibration corpus: no effect.
Increased dampening_frac to 0.1: no effect.
I have encountered the "input is not positive-definite" problem many times, and I am sure that the failure on gemma2 is different from the usual "input is not positive-definite" failure. The key is the abnormally large error value, which is unprecedented.
I suspect the problem may come from other Python libraries that llm-compressor depends on, because I remember once trying to quantize gemma-2-27b (although I gave up at the time), and I did not see such abnormal error values then.

kylesayrs (Collaborator) commented
@HelloCard Hm, I'll dig a little deeper into this. Hessian instability can sometimes be a symptom of incorrect data preprocessing. Please make sure that your dataset has non-identical samples and that your model's weights are loading correctly.
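
To illustrate those two checks concretely, here is a rough sketch; the helper name is hypothetical, and `ds`/`model` are assumed to be the tokenized calibration dataset and the loaded model from the reproduction script:

```python
# Sketch: sanity checks for the two failure modes mentioned above.
import torch

def check_calibration_and_weights(ds, model, num_samples=32):
    # 1) Calibration data: identical samples make the accumulated Hessian
    #    low-rank, which can break the Cholesky factorization in GPTQ.
    seen = set()
    duplicates = 0
    for sample in ds.select(range(min(num_samples, len(ds)))):
        key = tuple(sample["input_ids"])
        duplicates += key in seen
        seen.add(key)
    print(f"duplicate calibration samples in first {num_samples}: {duplicates}")

    # 2) Weights: NaN/Inf in any parameter would propagate into the Hessian
    #    and could produce huge GPTQ error metrics like the ones in the log.
    for name, param in model.named_parameters():
        if not torch.isfinite(param).all():
            print(f"non-finite values found in {name}")
```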

HelloCard (Author) commented Feb 20, 2025

@kylesayrs
I have tried:
Resetting the Ubuntu system image: no effect.
Using 512, 1024, 2048, and 4096 calibration samples: no effect.
Increasing dampening_frac to 0.1: no effect.
Shuffling the calibration set with different seeds: no effect.
Trying every llm-compressor version from 0.1 to 0.4: no effect.
Quantizing Mistral-Small-24B-Instruct-2501 with the same script: completed successfully (https://huggingface.co/noneUsername/Mistral-Small-24B-Instruct-2501-W8A8).
Copying the script from neuralmagic/gemma-2-9b-it-quantized.w8a8 for quantization: no effect.
Using lm_eval to test whether the downloaded bf16 model is damaged: it scores normally.

Any further suggestions?
