Name | RF_SAVE_ALL_OUT_NOJRAN_IGNORE_THE_REST_validation_env_f_pred_32_16902_6_1 |
Workunit | 4846888 |
Created | 13 Jun 2024, 9:44:59 UTC |
Sent | 13 Jun 2024, 10:41:49 UTC |
Report deadline | 14 Jun 2024, 10:41:49 UTC |
Received | 13 Jun 2024, 11:03:05 UTC |
Server state | Over |
Outcome | Computation error |
Client state | Compute error |
Exit status | 12 (0x0000000C) Unknown error code |
Computer ID | 47675 |
Run time | 9 sec |
CPU time | |
Validate state | Invalid |
Credit | 0.00 |
Device peak FLOPS | 3.47 GFLOPS |
Application version | Generalized biomolecular modeling and design with RoseTTAFold All-Atom v0.02 (nvidia_alpha) windows_x86_64 |
Peak working set size | 288.90 MB |
Peak swap size | 4,410.58 MB |
Peak disk usage | 2.09 MB |
<core_client_version>7.24.1</core_client_version>
<![CDATA[
<message>
 - exit code 12 (0xc)</message>
<stderr_txt>
D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\cuda\__init__.py:83: UserWarning: Found GPU%d %s which is of cuda capability %d.%d.
    PyTorch no longer supports this GPU because it is too old. The minimum cuda capability supported by this library is %d.%d.
  warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch // 10, min_arch % 10))
Traceback (most recent call last):
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 692, in <module>
    pred = Predictor(args)
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 270, in __init__
    self.model = RoseTTAFoldModule(
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 852, in to
    return self._apply(convert)
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 552, in _apply
    param_applied = fn(param)
  File "D:\Programs\BOINC\Data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 850, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 307.78 MiB already allocated; 1.63 MiB free; 308.00 MiB reserved in total by PyTorch)
</stderr_txt>
]]>
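PyTorch's "CUDA out of memory" message, as seen in the stderr above, has a fixed shape, so the memory figures can be pulled out mechanically when triaging batches of failed results. A minimal sketch (the parser is a hypothetical helper, not part of the RoseTTAFold or BOINC code):

```python
import re

# Matches PyTorch's legacy caching-allocator OOM message, e.g.:
# "Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; ...)"
OOM_RE = re.compile(
    r"Tried to allocate (?P<alloc>[\d.]+) MiB "
    r"\(GPU (?P<gpu>\d+); (?P<total>[\d.]+) GiB total capacity; "
    r"(?P<allocated>[\d.]+) MiB already allocated; "
    r"(?P<free>[\d.]+) MiB free; "
    r"(?P<reserved>[\d.]+) MiB reserved in total by PyTorch\)"
)

def parse_cuda_oom(stderr_text):
    """Return the memory figures (all in MiB) from a CUDA OOM message, or None."""
    m = OOM_RE.search(stderr_text)
    if m is None:
        return None
    return {
        "gpu": int(m.group("gpu")),
        "tried_mib": float(m.group("alloc")),
        "total_mib": float(m.group("total")) * 1024,  # GiB -> MiB
        "allocated_mib": float(m.group("allocated")),
        "free_mib": float(m.group("free")),
        "reserved_mib": float(m.group("reserved")),
    }

line = ("RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB "
        "(GPU 0; 2.00 GiB total capacity; 307.78 MiB already allocated; "
        "1.63 MiB free; 308.00 MiB reserved in total by PyTorch)")
info = parse_cuda_oom(line)
```

On this result the parsed figures show a 2 GiB card with only 1.63 MiB free at the point of failure, which matches the exit during model initialization.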
©2024 University of Washington
http://www.bakerlab.org