Process POREC:pairsToCooler (93) terminated with an error exit status (137) #78

Open
webgbi opened this issue Sep 30, 2024 · 5 comments
Labels
question Further information is requested

Comments

webgbi commented Sep 30, 2024


Hi, I was trying to run the Pore-C workflow on a local desktop with 16 cores and 128 GB of memory.

I kept getting an error at the pairsToCooler step.

Please advise how I could fix this. Thank you!

Here is the log.

This is epi2me-labs/wf-pore-c v1.2.2-g9ce4a1b.

Searching input for [.fastq, .fastq.gz, .fq, .fq.gz] files.
executor > local (2060)
[51/8fec41] process > POREC:fastcat (1) [100%] 1 of 1 ✔
[b4/e2f93b] process > POREC:index_bam (1) [100%] 1 of 1 ✔
[50/e67ff5] process > POREC:prepare_genome:index_ref_fai (1) [100%] 1 of 1 ✔
[0a/677cc3] process > POREC:prepare_genome:index_ref_mmi (1) [100%] 1 of 1 ✔
[ae/200242] process > POREC:digest_align_annotate (973) [ 68%] 977 of 1428
[- ] process > POREC:haplotag_alignments -
[- ] process > POREC:merge_coordsorted_bams -
[- ] process > POREC:merge_namesorted_bams -
[59/edd75e] process > POREC:create_restriction_bed (1) [100%] 1 of 1 ✔
[fc/9a204f] process > POREC:to_pairs_file (977) [100%] 977 of 977
[2d/b54d1a] process > POREC:pairsToCooler (93) [ 8%] 80 of 977, failed: 1
[- ] process > POREC:merge_mcools -
[- ] process > POREC:merge_pairs -
[- ] process > POREC:merge_pairs_stats -
[- ] process > POREC:pair_stats_report -
[- ] process > POREC:merge_paired_end_bams -
[4a/b373dc] process > POREC:getVersions [100%] 1 of 1 ✔
[fc/bc2f17] process > POREC:getParams [100%] 1 of 1 ✔
[eb/961b6d] process > POREC:makeReport [100%] 1 of 1 ✔
[- ] process > POREC:prepare_hic -
[82/ce69f7] process > POREC:collectIngressResultsInDir (1) [100%] 1 of 1 ✔
[- ] process > POREC:get_filtered_out_bam -
[44/8533fb] process > publish (1) [100%] 1 of 1 ✔
ERROR ~ Error executing process > 'POREC:pairsToCooler (93)'

Caused by:
Process POREC:pairsToCooler (93) terminated with an error exit status (137)

Command executed:

cooler cload pairs -c1 2 -p1 3 -c2 4 -p2 5 fasta.fai:1000 pore-c_dudleya_duplex.pairs.gz pore-c_dudleya_duplex.pairs.cool

Command exit status:
137

webgbi added the question label on Sep 30, 2024
sarahjeeeze (Contributor) commented Oct 8, 2024

Hi, thanks for reporting this. It looks like the process is running out of memory: exit status 137 means the task was killed with SIGKILL (128 + 9), which on a local run usually points to the out-of-memory killer, and the default memory allocation for this process is set fairly low. Out of interest, how large is your input file? You could try adding the following to the config and resuming the workflow:

process {
    withName: 'pairsToCooler' {
        memory = 16.GB
    }
}

@CarolinaA09

Hello,

Thank you for your help. I added those lines to the config; however, I am still getting an error. I have attached the trace.txt.

And here is the error:

Plus 2 more processes waiting for tasks…
ERROR ~ Error executing process > 'POREC:pairsToCooler (1046)'

Caused by:
Process POREC:pairsToCooler (1046) terminated with an error exit status (1)

Command executed:

cooler cload pairs -c1 2 -p1 3 -c2 4 -p2 5 fasta.fai:1000 20240605_Dicanthelium_clandestinum_PoreC.pairs.gz 20240605_Dicanthelium_clandestinum_PoreC.pairs.cool

Command exit status:
1

Command output:
(empty)

Command error:
INFO: Environment variable SINGULARITYENV_NXF_TASK_WORKDIR is set, but APPTAINERENV_NXF_TASK_WORKDIR is preferred
INFO: Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
INFO:cooler.create:Writing chunk 0: tmpf_1uggzb.multi.cool::0
INFO:cooler.create:Creating cooler at "tmpf_1uggzb.multi.cool::/0"
INFO:cooler.create:Writing chroms
INFO:cooler.create:Writing bins
INFO:cooler.create:Writing pixels
INFO:cooler.create:Writing indexes
INFO:cooler.create:Writing info
INFO:cooler.create:Merging into 20240605_Dicanthelium_clandestinum_PoreC.pairs.cool
INFO:cooler.create:Creating cooler at "20240605_Dicanthelium_clandestinum_PoreC.pairs.cool::/"
INFO:cooler.create:Writing chroms
INFO:cooler.create:Writing bins
INFO:cooler.create:Writing pixels
INFO:cooler.reduce:nnzs: [0]
INFO:cooler.reduce:current: [0]
Traceback (most recent call last):
  File "/home/epi2melabs/conda/bin/cooler", line 10, in <module>
    sys.exit(cli())
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/cli/cload.py", line 584, in pairs
    create_cooler(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 1038, in create_cooler
    create_from_unordered(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 763, in create_from_unordered
    create(cool_uri, bins, chunks, columns=columns, dtypes=dtypes, mode=mode, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 641, in create
    nnz, ncontacts = write_pixels(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/create/_create.py", line 211, in write_pixels
    for i, chunk in enumerate(iterable):
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/cooler/reduce.py", line 151, in __iter__
    combined = pd.concat(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/util/_decorators.py", line 331, in wrapper
    return func(*args, **kwargs)
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 368, in concat
    op = _Concatenator(
  File "/home/epi2melabs/conda/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 425, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

Work dir:
/work/users/c/a/caroe/Bean_assembly_pipeline/Pore_C/work/a5/84799084b452c56c4c93fbaac61dd6

Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run

-- Check '.nextflow.log' file for details
WARN: Killing running tasks (8)
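
From the traceback, the "No objects to concatenate" error is raised while cooler merges pixel chunks. Could it be that the pairs file for this chunk simply contains no records? That is only a guess, but a quick way to check the failing file (name taken from the command above) would be:

# count the non-header records in the chunk's pairs file; 0 would mean it is effectively empty
zcat 20240605_Dicanthelium_clandestinum_PoreC.pairs.gz | grep -vc '^#'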

webgbi (Author) commented Oct 22, 2024

Hi, thanks for reporting this. It looks like the process is running out of memory: exit status 137 means the task was killed with SIGKILL (128 + 9), which on a local run usually points to the out-of-memory killer, and the default memory allocation for this process is set fairly low. Out of interest, how large is your input file? You could try adding the following to the config and resuming the workflow:

process {
    withName: 'pairsToCooler' {
        memory = 16.GB
    }
}

Hi Sarah,

Thank you so much for the reply. May I ask how to add those lines to the config? Nextflow and the Pore-C pipeline are new to me.

sarahjeeeze (Contributor)

Hi, sorry for the late response. Did you manage to add it in the end? You can add those lines to any file, e.g. 'additional.config', and then provide that file to the workflow with the -c option, i.e. -c additional.config.
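
For example, you would re-run the same command you launched the workflow with, adding the extra config and -resume so the completed tasks are not repeated (the placeholder below stands for whatever input and reference parameters you used originally):

nextflow run epi2me-labs/wf-pore-c \
    <your original input and reference parameters> \
    -c additional.config \
    -resume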

aslisadli commented Dec 16, 2024

Hi, has the issue been solved with that config file? I have been getting the same error message with 16 CPUs and 8 GB of memory per CPU, even after adding the config file shown above. I would be glad if you could let me know how I can solve this issue.

executor > local (688)
[3b/d059ca] POREC:validate_sample_sheet | 1 of 1 ✔
[0e/aad11d] POREC:fastcat (4) | 4 of 4 ✔
[bb/d71057] POREC:index_bam (2) | 2 of 4
[- ] POREC:index_vcf -
[f4/15fabe] POR…e_genome:index_ref_mmi (1) | 1 of 1 ✔
[7a/05a44e] POR…est_align_annotate (58800) | 525 of 127254
[- ] POREC:haplotag_alignments -
[- ] POREC:merge_coordsorted_bams -
[- ] POREC:merge_namesorted_bams -
[0c/1595d0] POR…create_restriction_bed (1) | 1 of 1
[aa/f4640d] POREC:to_pairs_file (100) | 130 of 525
[12/b844f8] POREC:pairsToCooler (23) | 10 of 130, failed: 1
[- ] POREC:merge_mcools -
[- ] POREC:merge_pairs -
[- ] POREC:merge_pairs_stats -
[- ] POREC:pair_stats_report -
[- ] POREC:merge_paired_end_bams -
[04/c88288] POREC:getVersions | 1 of 1 ✔
[e7/a60be7] POREC:getParams | 1 of 1 ✔
[- ] POREC:makeReport | 0 of 1
[e8/08227d] POR…ectIngressResultsInDir (4) | 4 of 4 ✔
[5e/0e8dc2] publish (2) | 2 of 4
Plus 2 more processes waiting for tasks…
ERROR ~ Error executing process > 'POREC:pairsToCooler (23)'

Caused by:
Process POREC:pairsToCooler (23) terminated with an error exit status (1)

Command executed:

cooler cload pairs -c1 2 -p1 3 -c2 4 -p2 5 fasta.fai:1000 T1-1.pairs.gz T1-1.pairs.cool

Command exit status:
1

Command output:
(empty)
