Lithops

1.0.4

Added
- Added RabbitMQ logic
- Added is_notebook method
- Added wrapper to data_stream for partitioned data
- Created WrappedStreamingBodyPartition() class for data partitions
- Run multiple map() calls in the same executor (see the sketch after this list)
- Added tests for multiple executions on the same executor
- Added ibm_cos tests
- Added download_results parameter to the monitor() method to ensure all results are downloaded
- Added pika package to dependencies
- Tracking of new futures returned by a function
- Example of Docker image for dlib
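
Below is a minimal sketch of running several map() calls on the same executor, assuming the pywren_ibm_cloud executor API of this release line; the function and input data are illustrative only.

```python
import pywren_ibm_cloud as pywren

def double(x):
    return x * 2

pw = pywren.ibm_cf_executor()
pw.map(double, [1, 2, 3, 4])  # first map() on this executor
pw.map(double, [5, 6, 7, 8])  # second map() on the same executor
print(pw.get_result())        # gathers the results of both maps
```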

Changed
- Renamed the wait() method to monitor()
- Tuned some parameters to speed up the monitor() method
- Moved all partitioner logic into partitioner.py
- Installation script now accepts the PyWren version as a parameter
- Changed the way the last row position is obtained on a data partition
- Deleted duplicated get_result() method
- Hid noisy urllib3 debug logs within functions
- Activated remote_invocation when data comes from COS
- Docs updated

Fixed
- Fixed partitioner data wrapper
- Fixed tests to work with the latest partitioner update
- Fixed chunk threshold
- Fixed plot printing
- Fixed rabbitmq config
- Some other fixes

1.0.3

Added
- Added support for generating execution timeline plots (see the sketch after this list)
- pi_estimation_using_monte_carlo_with_PyWren.ipynb
- stock_prediction_monte_carlo_with_PyWren.ipynb
- Added missing modules in setup.py
- Added memory usage in action logs
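
A hedged sketch of generating the timeline plots mentioned above; the method name create_timeline_plots() and its arguments are assumptions for this release line, not a confirmed API.

```python
import pywren_ibm_cloud as pywren

def square(x):
    return x ** 2

pw = pywren.ibm_cf_executor()
pw.map(square, range(8))
pw.get_result()
# Assumed API: write the execution timeline plots to a local directory
pw.create_timeline_plots(dst='plots', name='square_run')
```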

Changed
- Reverted cloudpickle usage
- Moved default action memory to 512 MB
- Improved installation script
- Refactored deploy_runtime script
- Updated deploy_runtime script, clone command
- Docs updated
- Finished addressing the map() and map_reduce() consistency issue
- Raise an exception when an activation fails

Fixed
- Fixed issue reusing COS tokens
- Fixed partitioner object split
- Fixed map() and map_reduce() consistency issue
- Some other fixes

1.0.2

Added
- ibm_boto3 client passed to functions through the 'ibm_cos' parameter (see the sketch after this list)
- Added functional test script
- Added retry logic
- Added invoker logic
- Added missing init file
- Added user agent to identify PyWren
- Added missing dependencies
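
A minimal sketch of the 'ibm_cos' parameter, assuming the pywren_ibm_cloud executor API of this release line; the bucket and key names are illustrative only.

```python
import pywren_ibm_cloud as pywren

def object_size(key, ibm_cos):
    # 'ibm_cos' is injected by PyWren as a ready-to-use ibm_boto3 client
    body = ibm_cos.get_object(Bucket='my-bucket', Key=key)['Body'].read()
    return len(body)

pw = pywren.ibm_cf_executor()
pw.map(object_size, ['data/file1.txt', 'data/file2.txt'])
print(pw.get_result())
```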

Changed
- Abstracted COS logic for map() and map_reduce() methods
- pw.map() method implemented through COS abstractions
- Retry mechanism on exceptions
- Use the cloudpickle module across the whole project for serialization/deserialization
- Docs updated
- Storage separation
- Project update: the 'bx' and 'wsk' CLI tools are no longer necessary
- Updated setup.py
- Deleted requirements.txt
- Updated default_preinstalls

Fixed
- Fixed minor warnings
- Fixed logging
- Stepped back from gevent until its import is fixed in OpenWhisk
- Fixed issue when running a map_reduce() job from COS keys
- Fixed issue retrieving results
- Fixed issue when processing a dataset from a URL
- Fixed issue when running the map_reduce() method
- Some other fixes

1.0.1

Added
- Example of a Docker image based on Anaconda
- Enabled support for multiple call_async() calls in the same pw instance (see the sketch after this list)
- Added 'OutOfMemory' exception
- Jupyter notebook example
- Configurable cleaner for temporary data
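
A minimal sketch of issuing several call_async() calls on the same executor instance, assuming the pywren_ibm_cloud executor API of this release line; the function and parameters are illustrative only.

```python
import pywren_ibm_cloud as pywren

def add(params):
    x, y = params
    return x + y

pw = pywren.ibm_cf_executor()
pw.call_async(add, (1, 2))  # first asynchronous invocation
pw.call_async(add, (3, 4))  # second one on the same pw instance
print(pw.get_result())      # collects the results of both calls
```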

Changed
- Changed and improved logging so that it logs correctly within IBM Cloud Functions
- Changed the Python interpreter of the data_cleaner method
- Moved some info prints to debug
- Improved the remote function invocation mechanism

Fixed
- Fixed Flask security issue CVE-2018-1000656
- Fixed minor issue when futures is not a list
- Fixed default config exception; the API key is not mandatory
- Fixes in logging
- Some other fixes

1.0.0

First release.

- New runtime based on Docker images.
- Added IBM Cloud Functions API connector for function invocations.
- Added support for IBM Cloud Object Storage (COS) backend (or S3 API).
- Added support for OpenStack Swift backend (or Swift API).
- Added a timeout while PyWren is getting the results. This prevents PyWren from waiting forever when a function fails and its results are not stored in COS.
- Created the ibmcf/default_config.yaml config file for storing the main PyWren configuration parameters, such as Cloud Functions and COS access keys.
- Enabled the use of PyWren within a PyWren function.
- Enabled redirections. It is now possible to return a *Future* class as the response of a function. This means that the function has executed another function, and the local PyWren has to wait for the response of that other invocation.
- Added a new **map_reduce()**-like method: unlike the original **map** and **reduce** methods, this one integrates automatic data partitioning based on a chunk size provided by the user. Both the **map** and the **reduce** functions are orchestrated and executed within Cloud Functions, and the user just waits for the final result provided by the reduce function (see the sketch after this list).
- Automatic data discovery in the new **map_reduce()**-like method. With it, it is possible to specify a bucket name in order to process all the objects within it, instead of specifying each object one by one.
- Created a function which removes residual data from COS when the PyWren execution finishes.
- The main **executor** is now a *class* and not a *method*.
- All the methods available for the users are integrated within the main *executor class*.
- Added state to the main *executor class* in order to control the correct execution order of its methods (like a state machine).
- When a new *executor class* is instantiated, a new unique **executor_id** is created and used to store all the objects in COS and to retrieve the results.
- When a new *executor class* is instantiated, a *storage_handler* is created and used throughout the whole PyWren execution.
- It is now possible to specify the **runtime** when instantiating the *executor class*, instead of changing the config file every time (the config file specifies the default runtime).
- The **logging level** is now specified when instantiating the *executor class*, instead of setting it through an environment variable in the first line of the code.
- The PyWren code that is executed remotely as a wrapper of the function now uses the main storage handler, like the rest of the PyWren code. In previous versions, PyWren created a new storage client directly with the *boto3* library instead of using the pywren/storage/storage.py wrapper.
- Added support for multiple parameters in the functions that are executed remotely as cloud functions. Previous versions allowed just one parameter.
- Eased the usage of the storage backend within a function. By simply specifying *storage_handler* as a parameter of the function, the user gets access to the storage backend.
- Added a new method for retrieving the results of an execution called **fetch_all_results()**. Previous PyWren versions already included a method called *get_all_results()*, but it is sequential and takes a long time to retrieve all the results. A *wait()* method similar to *get_all_results()* was also included; the main difference is that the new method is based on *listing the available objects in a bucket* and returns when all the tasks are finished. The new method can also show a progress bar to track the current status of the execution (really useful for large executions).
- Added support for libraries not included in the IBM Cloud Functions native image (*python-jessie:3*). Some libraries necessary for executing PyWren (remote code) are not included in the native CF Docker image. PyWren now includes the *pywren/libs* dir with all of these libraries, so it is possible to use PyWren with the native Docker image instead of building a new one with the missing libraries.
- In the COS backend, the *boto3* library was changed to the *ibm_boto3* library.
- Increased the function execution timeout to 600 seconds (10 minutes).
- Other minor changes
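
A minimal sketch of the **map_reduce()** method described above, assuming the pywren_ibm_cloud executor API of this release line; the bucket name and chunk size are illustrative, and the single data_stream argument of the map function is an assumption based on the partitioner wrapper mentioned in 1.0.4.

```python
import pywren_ibm_cloud as pywren

def my_map_function(data_stream):
    # Each activation receives one chunk of the partitioned dataset
    return len(data_stream.read())

def my_reduce_function(results):
    return sum(results)

chunk_size = 4 * 1024 ** 2  # 4 MiB partitions

pw = pywren.ibm_cf_executor()
# A bucket name triggers the automatic data discovery described above
pw.map_reduce(my_map_function, 'my-bucket/', my_reduce_function, chunk_size)
print(pw.get_result())
```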
