rqdb

Latest version: v1.6.1

1.6.1

- Refactors read consistency constants to support linearizable reads (a faster variant of `strong`). This only changes types for the client; all the relevant work and details are in rqlite's support for this read consistency level.
- Updates BulkResult and ResultItem to include a `time` field indicating the time in seconds the server spent handling the full request and the individual requests respectively, via the timings query parameter.
- Updates the slow query logging method to be provided a `result` keyword argument (the BulkResult) if the handler can receive it as a keyword-only argument or has a variable keyword argument parameter (e.g. `**kwargs`); see the sketch after this list.
- Changes the slow query logging trigger to be based on server time, not local time.
- Changes the log format to include total local time spent (as before), the round trip time of just the last request (in the event of retries), and server time if available. Together these three numbers give a much better at-a-glance understanding of what's happening:
  - If total local time is high but the last request and server times are not, the request was slow due to retries.
  - If the last request's round trip time was high but the server time was low, the request was slow due to the network or client CPU contention.
  - If the server time was high, the request was slow because of the actual work required by the queries inside.
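
A minimal sketch of a handler that opts in to the new `result` keyword argument by accepting `**kwargs`. The `time` attribute read from the BulkResult follows the description above but should be treated as an assumption; check the library's types for the exact field name. The host list is a placeholder.

```py
import rqdb
import rqdb.logging


def on_slow_query(
    info: rqdb.logging.QueryInfo,
    /,
    *,
    duration_seconds: float,
    host: str,
    **kwargs,
):
    # Because this handler has **kwargs, rqdb 1.6.1+ can also pass the
    # BulkResult via the `result` keyword argument described above.
    result = kwargs.get("result")
    server_time = getattr(result, "time", None) if result is not None else None
    if server_time is not None:
        print(
            f"slow query {info.operations} took {duration_seconds:.3f}s locally,"
            f" {server_time:.3f}s on the server, via {host}"
        )
    else:
        print(f"slow query {info.operations} took {duration_seconds:.3f}s locally via {host}")


conn = rqdb.connect(
    ['127.0.0.1:4001'],  # placeholder node address
    log=rqdb.LogConfig(
        slow_query={
            "enabled": True,
            "threshold_seconds": 1,
            "method": on_slow_query,
        }
    ),
)
```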

1.6.0

Resolves unified queries never specifying read consistency, freshness, or redirect in the HTTP request, which led to behavior inconsistent with `executemany` or `execute`.

Also adds support for slow query reporting, which is based on the wall time between starting the request to the final successful host and reading the headers from that host. This is disabled by default and has no default message formatting.

Example of enabling slow query logging:

```py
import rqdb
import rqdb.logging


def on_slow_query(
    info: rqdb.logging.QueryInfo,
    /,
    *,
    duration_seconds: float,
    host: str,
    response_size_bytes: int,
    started_at: float,
    ended_at: float,
):
    print(
        f"Slow query with operations {info.operations} took {duration_seconds:.3f}s via {host}"
    )


conn = rqdb.connect(
    HOSTS,
    log=rqdb.LogConfig(
        slow_query={
            "enabled": True,
            "threshold_seconds": 0,
            "method": on_slow_query,
        }
    ),
)
```

1.5.0

Adds support for the unified endpoint, for when you want to mix inserts and queries in the same transaction, usually as an easier way to determine why an update failed.

For example, given the following schema:

```sql
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    uid TEXT UNIQUE NOT NULL
);

CREATE TABLE projects (
    id INTEGER PRIMARY KEY,
    uid TEXT UNIQUE NOT NULL,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    name TEXT NOT NULL
);

CREATE INDEX projects_user_idx ON projects(user_id);
```

Then a query to set a project's name by uid, but only if it's owned by the given user, where you want to distinguish between the project not existing and it being owned by a different user, might be accomplished as follows:

```py
import rqdb
import rqdb.connection
import dataclasses


class ProjectDoesNotExistError(Exception):
    def __init__(self, uid: str) -> None:
        super().__init__(f"there is no project with {uid=}")
        self.uid = uid


class ProjectHasDifferentOwnerError(Exception):
    def __init__(self, uid: str, required_owner: str, actual_owner: str) -> None:
        super().__init__(
            f"project {uid=} is owned by {actual_owner=} but was requested by {required_owner=}"
        )
        self.uid = uid
        self.required_owner = required_owner
        self.actual_owner = actual_owner


def set_project_name(conn: rqdb.connection.Connection, uid: str, name: str, user: str) -> None:
    cursor = conn.cursor()
    response = cursor.executeunified3(
        [
            (
                """
                UPDATE projects
                SET name = ?
                WHERE
                    projects.uid = ?
                    AND EXISTS (
                        SELECT 1 FROM users
                        WHERE
                            users.id = projects.user_id
                            AND users.uid = ?
                    )
                """,
                (name, uid, user),
            ),
            (
                """
                SELECT
                    users.uid
                FROM projects, users
                WHERE
                    projects.uid = ?
                    AND users.id = projects.user_id
                """,
                (uid,),
            ),
        ]
    )
    if response[0].rows_affected is not None and response[0].rows_affected > 0:
        return

    if not response[1].results:
        raise ProjectDoesNotExistError(uid)

    raise ProjectHasDifferentOwnerError(uid, user, response[1].results[0][0])
```

1.4.1

This ensures redirects are always managed by the client rather than internally by the cluster.

1.4.0

This release causes backups to attempt leader discovery before backing up and to use the leader for the backup where possible. This is a 10-20x performance improvement on rqlite v8.15.0, with dramatically reduced memory usage on the cluster.

Adds a `discover_leader()` function to rqlite connections, and an optional `initial_host` parameter for `fetch_response`.
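
A minimal sketch of calling the new function directly; the changelog does not spell out its signature, so the no-argument call below is an assumption, and in practice backups already perform this discovery for you as of this release.

```py
import rqdb

conn = rqdb.connect(['127.0.0.1:4001'])

# Assumed no-argument call; as of 1.4.0 backups do this automatically
# before streaming the backup from the leader where possible.
conn.discover_leader()
```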

1.3.0

Adds a new function, `cursor.explain`, which acts similarly to `cursor.execute`, except it prefixes the query with `EXPLAIN QUERY PLAN` (if it is not already an EXPLAIN query) and returns a string containing the query plan (formatted similarly to the sqlite3 API).

Also supports writing the query plan to stdout via the parameter `out='print'`, returning the tree representation with `out='plan'`, or writing to any writable bytes stream. For the async version, supports both synchronous and asynchronous writes, similar to backup.

```py
import rqdb

conn = rqdb.connect(['127.0.0.1:4001'])
cursor = conn.cursor()
cursor.execute('CREATE TABLE persons (id INTEGER PRIMARY KEY, uid TEXT UNIQUE NOT NULL, given_name TEXT NOT NULL, family_name TEXT NOT NULL)')

cursor.explain("SELECT id FROM persons WHERE TRIM(given_name || ' ' || family_name) LIKE ?", ('john d%',), out='print')
# --SCAN persons

cursor.execute("CREATE INDEX persons_name_idx ON persons(TRIM(given_name || ' ' || family_name) COLLATE NOCASE)")
cursor.explain("SELECT id FROM persons WHERE TRIM(given_name || ' ' || family_name) LIKE ?", ('john d%',), out='print')
# --SEARCH persons USING INDEX persons_name_idx (<expr>>? AND <expr><?)
```
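
A minimal standalone sketch of the stream variant; it assumes a writable bytes stream (an `io.BytesIO` here) can be passed via the same `out` parameter used above, which is an inference from the description rather than a documented signature.

```py
import io

import rqdb

conn = rqdb.connect(['127.0.0.1:4001'])
cursor = conn.cursor()
cursor.execute('CREATE TABLE persons (id INTEGER PRIMARY KEY, uid TEXT UNIQUE NOT NULL, given_name TEXT NOT NULL, family_name TEXT NOT NULL)')

# Capture the formatted plan in memory instead of printing it (assumed usage).
buffer = io.BytesIO()
cursor.explain('SELECT id FROM persons WHERE uid = ?', ('some-uid',), out=buffer)
print(buffer.getvalue().decode('utf-8'))
```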
