Name | RF_SAVE_ALL_OUT_NOJRAN_IGNORE_THE_REST_validation_env_f_pred_262_16902_6_1
Workunit | 4848193 |
Created | 13 Jun 2024, 6:50:50 UTC |
Sent | 13 Jun 2024, 8:43:53 UTC |
Report deadline | 14 Jun 2024, 8:43:53 UTC |
Received | 13 Jun 2024, 12:37:31 UTC |
Server state | Over |
Outcome | Computation error |
Client state | Compute error |
Exit status | 12 (0x0000000C) Unknown error code |
Computer ID | 49970 |
Run time | 1 min 27 sec |
CPU time | |
Validate state | Invalid |
Credit | 0.00 |
Device peak FLOPS | 5.23 GFLOPS |
Application version | Generalized biomolecular modeling and design with RoseTTAFold All-Atom v0.02 (nvidia_alpha) windows_x86_64 |
Peak working set size | 2,839.63 MB |
Peak swap size | 10,049.73 MB |
Peak disk usage | 2.09 MB |
<core_client_version>8.0.2</core_client_version>
<![CDATA[
<message>
The access code is invalid. (0xc) - exit code 12 (0xc)
</message>
<stderr_txt>
Traceback (most recent call last):
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 708, in <module>
    pred.predict(out_name+f'_{n}',
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\predict.py", line 551, in predict
    logit_s, logit_aa_s, logit_pae, logit_pde, p_bind, pred_crds, alpha, pred_allatom, pred_lddt_binned, msa_prev, pair_prev, state_prev = self.model(
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\RoseTTAFoldModel.py", line 358, in forward
    msa, pair, xyz, alpha_s, xyz_allatom, state, symmsub = self.simulator(
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\Track_module.py", line 1084, in forward
    msa_full, pair, xyz, state, alpha, symmsub = self.extra_block[i_m](msa_full, pair,
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\Track_module.py", line 929, in forward
    xyz, state, alpha = self.str2str(
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\cuda\amp\autocast_mode.py", line 141, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\Track_module.py", line 503, in forward
    shift = self.se3(G, node.reshape(B*L, -1, 1), l1_feats, edge_feats)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa\SE3_network.py", line 96, in forward
    return self.se3(G, node_features, edge_features)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa/SE3Transformer\se3_transformer\model\transformer.py", line 185, in forward
    node_feats = self.graph_modules(node_feats, edge_feats, graph=graph, basis=basis)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa/SE3Transformer\se3_transformer\model\transformer.py", line 47, in forward
    input = module(input, *args, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa/SE3Transformer\se3_transformer\model\layers\attention.py", line 162, in forward
    fused_key_value = self.to_key_value(node_features, edge_features, graph, basis)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa/SE3Transformer\se3_transformer\model\layers\convolution.py", line 347, in forward
    out += self.conv_in[str(degree_in)](feature, invariant_edge_feats,
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\ev0\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\BOINC\projects\ralph.bakerlab.org\cv2\rf2aa/SE3Transformer\se3_transformer\model\layers\convolution.py", line 193, in forward
    tmp = (features[e_i:e_j] @ basis_view.float()).view(e_j-e_i, -1, basis.shape[-1])
RuntimeError: CUDA out of memory. Tried to allocate 126.00 MiB (GPU 0; 24.00 GiB total capacity; 1.26 GiB already allocated; 2.03 GiB free; 1.37 GiB reserved in total by PyTorch)
</stderr_txt>
]]>
©2024 University of Washington
http://www.bakerlab.org