Added
- Results manager module to retrieve results from a file, delete a result, and redispatch a result object.
- Result retrieval can also wait, returning only once the dispatch has either completed or failed.
- Results class, used to store a result with all the information needed to use it again, along with functionality to save the result to a file.
- A result object is a mutable object which is updated by the dispatcher and saved to a file throughout the dispatching and execution stages.
- The transport graph is now manipulated directly inside the result object.
- Utility to convert a function definition string to a function and vice versa (see the sketches after this list).
- Status class to denote the status of a result object and of each node execution in the transport graph.
- Start and end times are now also stored for each node execution as well as for the whole dispatch.
- Logging of `stdout` and `stderr` can be enabled by passing the `log_stdout` and `log_stderr` named metadata, respectively, when dispatching.
- To get the result of a given dispatch, the `dispatch_id`, `results_dir`, and `wait` parameters can be passed in (see the sketches after this list). With the defaults, only the dispatch id is required, no waiting is done, and results are stored under a `results/` folder in the current working directory, where each dispatch gets its own folder named after its dispatch id, containing:
- `result.pkl` - (Cloud)pickled result object.
- `result_info.yaml` - YAML file with high-level information about the result and its execution.
- `dispatch_source.py` - Generated Python file containing the original lattice and electron function definitions, which can be used to dispatch again.
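
A minimal sketch of how these pieces might fit together, assuming a `covalent`-style import alias, decorator-level placement of the `log_stdout`/`log_stderr` metadata, and a `get_result` call taking the parameters listed above; the exact names and signatures in this release may differ.

```python
import covalent as ct

# The log_stdout / log_stderr metadata names come from the entry above;
# attaching them through the lattice decorator is an assumption.
@ct.electron
def add(x, y):
    return x + y

@ct.lattice(log_stdout="stdout.log", log_stderr="stderr.log")
def workflow(x, y):
    return add(x, y)

# Dispatch the workflow, then fetch its result, waiting until it has
# either completed or failed.
dispatch_id = ct.dispatch(workflow)(1, 2)
result = ct.get_result(
    dispatch_id=dispatch_id,
    results_dir="./results",  # default: results/ in the current working directory
    wait=True,                # return only once the dispatch has completed or failed
)
print(result.status)
```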
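
The string/function conversion utility itself is not shown here; the sketch below only illustrates the general technique (capturing source with `inspect.getsource` and rebuilding the callable with `exec`), and the helper names are hypothetical.

```python
import inspect

def function_to_string(func) -> str:
    """Capture the full source of a function definition."""
    return inspect.getsource(func)

def string_to_function(func_string: str, func_name: str):
    """Rebuild a callable by executing its definition in a scratch namespace."""
    namespace = {}
    exec(func_string, namespace)
    return namespace[func_name]

def square(x):
    return x * x

source = function_to_string(square)   # works for functions defined in a file
rebuilt = string_to_function(source, "square")
assert rebuilt(3) == 9
```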
Changed
- `logfile` named metadata is now `slurm_logfile`.
- `cloudpickle` is now used everywhere instead of `jsonpickle` to maintain consistency (see the sketches after this list).
- `to_json` function uses `json` instead of `jsonpickle` now in electron and lattice definitions.
- `post_processing` moved to the dispatcher, which now stores the finished execution result in the user-specified results folder, so no post-processing is required on the client/user side.
- `run_task` function in the dispatcher modified to check whether a node has already completed execution and return its result if so, else continue its execution. This also handles the case where the server is shut down mid-execution: it can be restarted from the last saved state, and the user won't have to wait for the whole execution to rerun.
- Instead of passing the transport graph and dispatch id around everywhere, the result object is now passed, except in the `asyncio` part, where the dispatch id and results directory are passed so that the core dispatcher knows where to load the result object from and operate on it (see the sketches after this list).
- Results of parent node executions in the graph are now retrieved through the result object's graph, which is also where each execution's result is stored.
- Tests updated to reflect the changes made. They are also being run in a serverless manner.
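
The `cloudpickle` serialization mentioned above can round-trip objects that JSON-based serialization cannot, such as closures and lambdas; a minimal illustration of the difference, independent of any dispatcher internals:

```python
import cloudpickle

def make_adder(n):
    # Closures and lambdas like this are not JSON-serializable,
    # but cloudpickle round-trips them without trouble.
    return lambda x: x + n

payload = cloudpickle.dumps(make_adder(5))
restored = cloudpickle.loads(payload)
assert restored(2) == 7
```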
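
A sketch of how a result object could be rehydrated from the on-disk layout described in the Added section; `load_result` is a hypothetical helper, and the `status` attribute is assumed from the Status entry above.

```python
import os
import cloudpickle

def load_result(dispatch_id: str, results_dir: str = "./results"):
    """Rehydrate the cloudpickled result object written by the dispatcher."""
    path = os.path.join(results_dir, dispatch_id, "result.pkl")
    with open(path, "rb") as f:
        return cloudpickle.load(f)

# Usage, given a dispatch id returned by an earlier dispatch call:
#   result = load_result(dispatch_id)
#   print(result.status)
```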
Removed
- `LatticeResult` class removed.
- `jsonpickle` requirement removed.
- `WorkflowExecutionResult`, `TaskExecutionResult`, and `ExecutionError` singleton classes removed.
Fixed
- Commented out the `jwt_required()` part in `covalent-dispatcher/_service/app.py`; it may be removed in later iterations.
- The dispatcher server now returns an error message in the get-result response when retrieval fails, instead of sending every stored result as the response.