TypeDB Driver

Latest version: v2.29.0


2.0.0-alpha-4

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client==2.0.0-alpha-4


New Features

- **RPC implementation for sending match aggregate, match group, and match group aggregate queries**
The client can now send match aggregate, match group, and match group aggregate queries to the server over RPC.

---

**Please refer to [full release notes of 2.0.0-alpha](https://github.com/graknlabs/client-python/releases/tag/2.0.0-alpha) to see the changes contained in 2.0.0.**

2.0.0-alpha-3

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client==2.0.0-alpha-3


---

**Please refer to [full release notes of 2.0.0-alpha](https://github.com/graknlabs/client-python/releases/tag/2.0.0-alpha) to see the changes contained in 2.0.0.**

2.0.0-alpha-2

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client==2.0.0-alpha-2


---

**Please refer to [full release notes of 2.0.0-alpha](https://github.com/graknlabs/client-python/releases/tag/2.0.0-alpha) to see the changes contained in 2.0.0.**

2.0.0-alpha

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client==2.0.0-alpha


New Client-Server Protocol: a [Reactive Stream](https://en.wikipedia.org/wiki/Reactive_Streams)

With server performance scaled up, we needed to ensure that client-server communication was not a bottleneck. We wanted the client application to leverage the server's asynchronous parallel computation, receiving as many answers as possible as soon as they are ready, without being overwhelmed by server responses. So we needed some form of *"back-pressure"*; yet, to maintain maximum throughput, everything had to be non-blocking. Sound familiar? It's the *"[reactive stream](https://www.reactive-streams.org)"* problem.

We took inspiration from [Java Flow](https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/Flow.html) and [Akka Streams](https://doc.akka.io/docs/akka/current/stream/index.html), and built our own reactive stream over [gRPC](https://grpc.io), as lightweight as possible, with our own optimisations. When an application sends a query from the client to the server, a (configurable) batch of asynchronously computed answers is immediately streamed from the server to the client, reducing network round trips and increasing throughput. Once the first batch is consumed, the client requests another. To remove the waiting time between consecutive batches, we predict that duration and stream back surplus answers for a period of that length at the end of every batch. This maintains a continuous stream of answers at maximum throughput, without overflowing the application.

We then hit the limit on the number of responses gRPC can send per second. The last trick, therefore, was to bundle multiple query answers into a single RPC response from the server. The impact on query response time was negligible, but it dramatically increased answer throughput again.
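The request-batch/prefetch pattern described above can be sketched in plain Python. This is a toy illustration only, not the actual driver or gRPC code; `BatchStream` and every name in it are hypothetical:

```python
import queue
import threading

class BatchStream:
    """Toy sketch of batch-based streaming with back-pressure.

    A 'server' thread sends one batch per client request; the client
    iterator requests the next batch as soon as it starts consuming the
    current one. The server never floods the client, yet the client
    rarely waits, because the next batch is requested eagerly.
    """

    def __init__(self, answers, batch_size=50):
        self._answers = list(answers)
        self._batch_size = batch_size
        self._batches = queue.Queue()   # server -> client
        self._requests = queue.Queue()  # client -> server (back-pressure)
        self._done = object()           # sentinel marking end of stream
        threading.Thread(target=self._serve, daemon=True).start()
        self._request_next()  # the first batch streams back immediately

    def _serve(self):
        pos = 0
        while True:
            self._requests.get()  # block until the client asks for more
            batch = self._answers[pos:pos + self._batch_size]
            pos += self._batch_size
            self._batches.put(batch if batch else self._done)
            if not batch:
                return

    def _request_next(self):
        self._requests.put(None)

    def __iter__(self):
        while True:
            batch = self._batches.get()
            if batch is self._done:
                return
            self._request_next()  # prefetch while we consume this batch
            yield from batch
```

The real driver additionally predicts consumption time and bundles answers per RPC response, but the control flow is the same: consume, request ahead, never block the producer.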

The new client architecture and [Protobuf](https://developers.google.com/protocol-buffers) definitions have also been hugely simplified, easing the effort for developers building their own client libraries.

**Please refer to [full release notes of Grakn 2.0.0-alpha](https://github.com/graknlabs/grakn/releases/tag/2.0.0-alpha) to see the changes in Grakn 2.0.0.**

1.8.1

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client

Or you can upgrade your local installation with:

pip install -U grakn-client



Bugs Fixed

- **Fix leaking gRPC threads on transaction error.**
We block the gRPC request observer in order to wait for new client requests. If the transaction errors and we do not unblock this observer, the gRPC thread is left waiting forever. Previously, one error case could lead to exactly that, so this PR patches it by allowing `close()` to work correctly even on an error.
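The mechanism of the fix can be illustrated with a minimal, hypothetical Python sketch (not the driver's actual code): a condition variable keeps the waiting thread blocked until either a request arrives or `close()` is called, including on error, so the thread can exit instead of leaking:

```python
import threading

class RequestObserver:
    """Illustrative sketch of an observer that must be unblockable.

    A worker thread blocks in next_request() waiting for the next client
    request. close() must wake it even when the transaction has errored,
    otherwise the thread waits forever (the leak being fixed here).
    """

    def __init__(self):
        self._requests = []
        self._cond = threading.Condition()
        self._closed = False

    def submit(self, request):
        with self._cond:
            self._requests.append(request)
            self._cond.notify()

    def next_request(self):
        """Block until a request arrives or the observer is closed."""
        with self._cond:
            while not self._requests and not self._closed:
                self._cond.wait()
            if self._requests:
                return self._requests.pop(0)
            return None  # closed: let the worker thread exit cleanly

    def close(self):
        # Called on normal completion *and* on transaction error.
        with self._cond:
            self._closed = True
            self._cond.notify_all()
```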


Other Improvements

- **Fix CI by including needed dependencies.**
The previous PR (120) didn't account for the need to call `graknlabs_dependencies//tool/sync:dependencies` in one of the CI jobs, so we restore the imports that are actually needed.

- **Clean up WORKSPACE file from extraneous load statements.**
To keep the codebase clean and maintainable, extraneous load statements should not be present in `WORKSPACE`.

1.8.0

PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python

Distribution

Available through https://pypi.org

pip install grakn-client

Or you can upgrade your local installation with:

pip install -U grakn-client



New Features

- **Introduce further Query Options.**
We introduce a modified `infer` option, plus new `batch_size` and `explain` options for queries:

```python
Transaction.query("...",
                  infer=Transaction.Options.SERVER_DEFAULT,
                  explain=Transaction.Options.SERVER_DEFAULT,
                  batch_size=Transaction.Options.SERVER_DEFAULT)
```

The default `SERVER_DEFAULT` value means that the server automatically chooses the value for `infer`, `explain`, and `batch_size`. For reference, the server will default to `infer = True`, `explain = True`, and `batch_size = 50`.
**Use `explain=True` if you want to retrieve explanations from your query.** This option was introduced to ensure correct explanations without blowing up transaction memory when not required.

- **Add future-style get for explicit waiting and error handling.**
Since the introduction of asynchronous query processing, the error handling model has become less clear, as an error could be picked up on a line unrelated to its corresponding query. In order to allow clients to explicitly consume query completion, a `get()` method is added to the query result (iterator) which will block until the results are received, or throw an exception on error.
Clients looking to benefit from the asynchronous processing can continue without using the `get()` syntax.
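As a purely illustrative sketch (hypothetical names, not the driver's actual classes), the future-style `get()` pattern looks like this:

```python
import concurrent.futures

class QueryResult:
    """Result iterator with a future-style get().

    get() blocks until all answers are received and re-raises any query
    error at a predictable point in the calling code, instead of on an
    unrelated later line.
    """

    def __init__(self, future):
        self._future = future  # resolves to a list of answers, or raises

    def get(self, timeout=None):
        # Block until the results arrive; the query's error surfaces here.
        return self._future.result(timeout)

    def __iter__(self):
        # Callers preferring asynchronous consumption simply iterate
        # and never call get() themselves.
        return iter(self.get())
```

A caller that wants explicit completion and error handling writes `result.get()`; one that does not simply iterates as before.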


- **Add Explanation.get_rule().**
The `Rule` that corresponds to an explanation is now returned in the protocol responses, but the Python `Explanation` object did not record it. This PR records the rule if it is valid, and sets it to `None` otherwise.

Bugs Fixed

- **Fix explanation throwing an exception.**
A bug was introduced with local concepts that made it impossible to fetch explanations.


Code Refactors

- **Remove implicit, rename date to datetime, rename datatype to valuetype.**
This change synchronises with changes in Grakn Core (https://github.com/graknlabs/grakn/pull/5722) that remove implicit types, and also updates the client to use `datetime` instead of `date` (including a protocol update). Finally, we also propagate the rename from `datatype` to `valuetype`.
