Name | RF_SAVE_ALL_OUT_NOJRAN_IGNORE_THE_REST_validation_env_g_pred_35_16903_5_1 |
Workunit | 4849266 |
Created | 14 Jun 2024, 0:21:08 UTC |
Sent | 14 Jun 2024, 4:17:50 UTC |
Report deadline | 15 Jun 2024, 4:17:50 UTC |
Received | 14 Jun 2024, 4:20:54 UTC |
Server state | Over |
Outcome | Computation error |
Client state | Compute error |
Exit status | 12 (0x0000000C) Unknown error code |
Computer ID | 34688 |
Run time | 20 sec |
CPU time | |
Validate state | Invalid |
Credit | 0.00 |
Device peak FLOPS | 5.09 GFLOPS |
Application version | Generalized biomolecular modeling and design with RoseTTAFold All-Atom v0.02 (nvidia_alpha) windows_x86_64 |
Peak working set size | 1,557.19 MB |
Peak swap size | 6,455.95 MB |
Peak disk usage | 2.11 MB |
<core_client_version>7.24.1</core_client_version>
<![CDATA[
<message>
The access code is invalid.
 (0xc) - exit code 12 (0xc)</message>
<stderr_txt>
D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\cuda\__init__.py:83: UserWarning:
    Found GPU%d %s which is of cuda capability %d.%d.
    PyTorch no longer supports this GPU because it is too old. The minimum cuda capability supported by this library is %d.%d.
  warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch // 10, min_arch % 10))
Traceback (most recent call last):
  File "D:\BOINC_data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 692, in <module>
    pred = Predictor(args)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 282, in __init__
    checkpoint = torch.load(args.checkpoint, map_location=self.device)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 882, in _load
    result = unpickler.load()
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 857, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 846, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 824, in restore_location
    return default_restore_location(storage, map_location)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\serialization.py", line 157, in _cuda_deserialize
    return obj.cuda(device)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\_utils.py", line 79, in _cuda
    return new_type(self.size()).copy_(self, non_blocking)
  File "D:\BOINC_data\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\cuda\__init__.py", line 528, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 1.35 GiB already allocated; 0 bytes free; 1.35 GiB reserved in total by PyTorch)
</stderr_txt>
]]>
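The traceback shows the checkpoint being unpickled with `torch.load(args.checkpoint, map_location=self.device)`, so every tensor is copied onto GPU 0 as it is deserialized; on a 2 GiB card that was already 1.35 GiB full, this raises the CUDA out-of-memory error above. A common workaround (a sketch, not the task application's actual code; the file names here are illustrative stand-ins) is to deserialize to CPU memory first and move tensors to the GPU afterwards:

```python
# Sketch: avoid GPU OOM during checkpoint load by deserializing to CPU first.
# Requires PyTorch; the checkpoint created below is a small stand-in so the
# example is self-contained, not the RoseTTAFold All-Atom checkpoint.
import os
import tempfile

import torch

ckpt_path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save({"weight": torch.zeros(4, 4)}, ckpt_path)

# map_location="cpu" keeps deserialization off the GPU entirely, so a
# nearly full 2 GiB card cannot fail inside torch.load itself.
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Move tensors to the GPU only afterwards, and only if one is usable.
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = {k: v.to(device) for k, v in checkpoint.items()}
```

This does not add GPU memory, of course; if the model's weights alone exceed what is free on the device, the copy in the last step will still fail, and the `UserWarning` above suggests this GPU is below PyTorch's minimum supported CUDA capability in any case.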
©2024 University of Washington
http://www.bakerlab.org