Task 5461847

Name: RF_SAVE_ALL_OUT_NOJRAN_IGNORE_THE_REST_validation_env_g_pred_38_16903_4_1
Workunit: 4849244
Created: 14 Jun 2024, 2:20:25 UTC
Sent: 14 Jun 2024, 4:22:38 UTC
Report deadline: 15 Jun 2024, 4:22:38 UTC
Received: 14 Jun 2024, 4:25:15 UTC
Server state: Over
Outcome: Computation error
Client state: Compute error
Exit status: 12 (0x0000000C) Unknown error code
Computer ID: 34688
Run time: 8 sec
CPU time:
Validate state: Invalid
Credit: 0.00
Device peak FLOPS: 5.09 GFLOPS
Application version: Generalized biomolecular modeling and design with RoseTTAFold All-Atom v0.02 (nvidia_alpha), windows_x86_64
Peak working set size: 233.25 MB
Peak swap size: 4,363.30 MB
Peak disk usage: 2.11 MB

Stderr output

<core_client_version>7.24.1</core_client_version>
<![CDATA[
<message>
The access code is invalid.
 (0xc) - exit code 12 (0xc)</message>
<stderr_txt>
D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\cuda\__init__.py:83: UserWarning: 
    Found GPU%d %s which is of cuda capability %d.%d.
    PyTorch no longer supports this GPU because it is too old.
    The minimum cuda capability supported by this library is %d.%d.
    
  warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch // 10, min_arch % 10))
Traceback (most recent call last):
  File "D:\BOINC_data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 692, in <module>
    pred = Predictor(args)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 272, in __init__
    aamask = util.allatom_mask.to(self.device),
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

</stderr_txt>
]]>
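
For context, the log above shows two related failure modes: PyTorch warns that the GPU's CUDA capability is below the minimum supported by this build, and the subsequent .to(self.device) call in Predictor.__init__ then fails with a CUDA out-of-memory error. A common defensive pattern is to validate the device before use and fall back to the CPU. The sketch below is illustrative only and is not taken from rf2aa/predict.py; pick_device, to_device_or_cpu, and MIN_CAPABILITY are hypothetical names, and the capability threshold depends on the actual PyTorch build.

import torch

# Assumed minimum CUDA capability for illustration; the real threshold
# depends on the PyTorch build that shipped with the application.
MIN_CAPABILITY = (3, 7)

def pick_device(min_capability=MIN_CAPABILITY):
    """Return a usable torch.device, preferring CUDA only when it is viable."""
    if not torch.cuda.is_available():
        return torch.device("cpu")
    major, minor = torch.cuda.get_device_capability(0)
    if (major, minor) < min_capability:
        # GPU is too old for this PyTorch build (the UserWarning above).
        return torch.device("cpu")
    return torch.device("cuda:0")

def to_device_or_cpu(tensor, device):
    """Move a tensor to the chosen device, falling back to CPU on CUDA OOM."""
    try:
        return tensor.to(device)
    except RuntimeError as err:
        if "out of memory" not in str(err):
            raise
        torch.cuda.empty_cache()  # release cached blocks before retrying on CPU
        return tensor.to("cpu")

# Usage, mirroring the failing line in Predictor.__init__ (hypothetical):
#   device = pick_device()
#   aamask = to_device_or_cpu(util.allatom_mask, device)

As the error message suggests, setting the environment variable CUDA_LAUNCH_BLOCKING=1 before launching the process makes CUDA errors report synchronously, so the traceback points at the call that actually failed.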