Resources

LogQS workflows center on creating, reading, updating, and deleting resources, which are organized into five primary categories:

  • Core Resources

  • Process Resources

  • Object Storage Resources

  • Organization Resources

  • User Resources

Most LogQS interactions occur with the Core resources, which are used to read log data.

Process resources are used to manage the ingestion and digestion of log data.

Object storage resources are used to manage the storage and access of log data.

Organization resources are used to organize log data and process interactions.

User resources are used to manage users and their permissions.

Common Fields

Many of the resources in LogQS share common concepts and fields which work consistently across all resources. This section describes these common concepts and fields.

ID Fields

Nearly all resources have an id field which is a unique identifier for the resource. The id field is a UUIDv4 string which is generated by LogQS when the resource is created. The id field is immutable and cannot be changed after the resource is created. The id field is used to reference the resource in API requests and responses.

A notable exception is the record resource, which doesn’t have an id field. Instead, records are identified by the combination of their timestamp field, which is unique within a topic, and their topic_id, which identifies the topic to which the record belongs.

Timestamp Fields

Timestamps are used throughout LogQS to represent the time at which an event occurred. Timestamps are represented as a 64-bit integer representing the number of nanoseconds since the Unix epoch (January 1, 1970, 00:00:00 UTC). This representation allows for high precision and is suitable for representing timestamps in a wide range of applications.

Although Record resources are the only resource with an explicit timestamp field, timestamps are also used in other resources to represent the time at which an event occurred. For example, the start_time and end_time fields on Log and Topic resources represent the earliest and latest record times, respectively, of the records associated with the log or topic. Similarly, timestamps may be used as query parameters, such as the timestamp_gt and timestamp_lt parameters used to filter records with a timestamp greater than or less than a given value (respectively).
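
As an illustration, a wall-clock time can be converted to and from this nanosecond representation using only the Python standard library:

import base64  # not needed here; shown later for API keys
from datetime import datetime, timezone

# Convert a wall-clock time to a LogQS-style nanosecond timestamp.
dt = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
timestamp = int(dt.timestamp()) * 1_000_000_000 + dt.microsecond * 1_000

# Convert a nanosecond timestamp back to a datetime (precision beyond
# microseconds is lost, since datetime only resolves to microseconds).
recovered = datetime.fromtimestamp(timestamp / 1_000_000_000, tz=timezone.utc)
print(timestamp, recovered)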

Audit Fields

Nearly all resources have audit fields which are used to track the creation and modification of the resource. The audit fields include:

  • created_at: An ISO 8601 datetime string representing the time at which the resource was created.

  • created_by: A UUID representing the ID of the user who created the resource.

  • updated_at: An ISO 8601 datetime string representing the time at which the resource was last updated. This field is automatically updated when the resource is modified.

  • updated_by: A UUID representing the ID of the user who last updated the resource. This field is automatically updated when the resource is modified.

  • deleted_at: An ISO 8601 datetime string representing the time at which the resource was deleted. This field is only populated when the resource is deleted.

  • deleted_by: A UUID representing the ID of the user who deleted the resource. This field is only populated when the resource is deleted.

These fields are automatically populated by LogQS when the resource is created, updated, or deleted. The created_by, updated_by, and deleted_by fields are populated with the ID of the user who performed the action, while the created_at, updated_at, and deleted_at fields are populated with the current time. These fields are only used for reference and cannot be modified by the user.

Sometimes the system itself will create, update, or delete resources. In such cases, the created_by, updated_by, and deleted_by fields will be null, while the created_at, updated_at, and deleted_at fields will still be populated with the time at which the action occurred.

Lock Fields

Some resources can be locked, meaning they cannot be modified or deleted unless a lock token is provided with the request. This is useful for preventing accidental modification or deletion of resources that are being used by other processes or users. There are four lock fields:

  • locked: A boolean indicating whether the resource is locked. If true, the resource cannot be modified or deleted unless a lock token is provided.

  • lock_token: A string representing the lock token for the resource. If the resource is locked, this field must be provided with any request to modify or delete the resource.

  • locked_at: An ISO 8601 datetime string representing the time at which the resource was locked. This field is only populated when the resource is locked.

  • locked_by: A UUID representing the ID of the user who locked the resource. This field is only populated when the resource is locked.

The lock token is a randomly generated string created by LogQS when the resource is locked; it must be provided with any request to modify or delete the resource. If the lock token is not provided, or if it does not match the lock token on the resource, the request will fail with a 403 Forbidden error.
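
For illustration, the sketch below supplies a lock token when updating a locked resource over HTTP. The endpoint path and request shape are assumptions made for this example; the actual routes, and where the lock token is passed, are defined by the LogQS API.

import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only
LOG_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical log ID

# Without the correct lock_token, this update would fail with 403 Forbidden.
response = requests.patch(
    f"{BASE_URL}/logs/{LOG_ID}",
    json={"name": "renamed-log", "lock_token": "<lock token>"},
    headers={"Authorization": "Bearer <credentials>"},
)
response.raise_for_status()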

Process Fields

Some resources are associated with processes, such as ingestions and digestions. These resources have additional fields which are used to track the progress and state of the process. These fields include:

  • state: A string representing the current state of the process. The possible values are ready, queued, processing, finalizing, complete, failed, and archived.

  • previous_state: A string representing the previous state of the process. This field is only populated when the state changes.

  • transitioned_at: An ISO 8601 datetime string representing the time at which the process state was last changed. This field is automatically updated when the state changes.

  • progress: A float between 0 and 1 representing the progress of the process.

  • workflow_id: A UUID representing the ID of the workflow that is being used to process the resource.

  • workflow_context: A JSON object containing any additional context that is useful for the workflow. This field is only populated when the workflow accepts arguments which can be supplied by the user.

  • error_name: A string representing the name of the error that occurred during the process. This field is only populated when the process fails.

  • error_message: A string representing the error message that occurred during the process. This field is only populated when the process fails.

  • error_payload: A JSON object containing any additional data that is useful for understanding the error. This field is only populated when the process fails.

The state field is a string representing the current state of the process. The possible values are:

  • ready: The process is ready to be processed (the default state).

  • queued: The process is queued to be processed. The user should transition the process to this state when they are ready for it to be processed.

  • processing: The process is currently being processed. The user should not transition the process to this state.

  • finalizing: The process transitions to this state when it has finished processing and has created its process parts. Once the process parts are complete, the process will transition to the complete state.

  • complete: The process is complete. The process parts have been processed, and the data from the process has been added to the log. The process should remain in this state indefinitely until the user archives it.

  • failed: The process failed to complete. The process should remain in this state indefinitely until the user archives it or re-queues it.

  • archived: The process has been archived. The process should remain in this state indefinitely until the user deletes it.
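
Tying these states together, a common pattern is to transition a process to queued and then poll its state and progress until it reaches complete or failed. The sketch below assumes illustrative endpoint paths and that the response body is the resource itself.

import time
import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only
HEADERS = {"Authorization": "Bearer <credentials>"}
INGESTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical ingestion ID

# Queue the process; LogQS moves it through processing and finalizing on its own.
requests.patch(f"{BASE_URL}/ingestions/{INGESTION_ID}",
               json={"state": "queued"}, headers=HEADERS).raise_for_status()

# Poll until the process reaches a terminal state.
while True:
    ingestion = requests.get(f"{BASE_URL}/ingestions/{INGESTION_ID}", headers=HEADERS).json()
    print(ingestion.get("state"), ingestion.get("progress"))
    if ingestion.get("state") in ("complete", "failed"):
        break
    time.sleep(5)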

Reference Fields

Many resources in LogQS have fields dedicated to providing users with a place to store reference data. These fields are typically free-form text or JSON fields that can be used to store any additional information that is useful for the resource. These fields include:

  • name: A string representing the name of the resource. This field is intended to be used for human-readable names and can be used to store any information that is useful for the resource.

  • note: A free-form text field that can be used to store any additional information about the resource. This field is intended to be used for human-readable notes and can be used to store any information that is useful for the resource.

  • context: A JSON object that can be used to store any additional information about the resource. This field is intended to be used for structured data and can be used to store any information that is useful for the resource.

Control Fields

Some resources have fields which dictate the behavior of certain actions and interactions with LogQS. These fields are typically used to control the behavior of the resource and can enable or disable certain features. They include:

  • disabled: A boolean indicating whether the resource is disabled. If true, the resource is disabled and cannot be used. This field is typically used for resources that can be temporarily disabled, such as workflows or hooks.

  • managed: A boolean indicating whether the resource is managed by LogQS. If true, the resource is managed by LogQS and cannot be modified or deleted by the user. This field is typically used for resources that are created and managed by LogQS.

  • default: A boolean indicating whether the resource is the default resource for the type. If true, the resource is the default resource and will be used by LogQS when no other resource is specified.

Core Resources

Logs

Logs are the primary resource in LogQS. All other core and process resources are always associated with one log. Logs can be seen as containers for topics (which can, in turn, be seen as containers for records). Before any core data can be created or ingested, a log must first be created.

Logs belong to one group (referenced via the group_id field), specified on creation. Logs can be moved between groups. A log’s name field must be unique within a group, but is otherwise arbitrary and only used for reference. Log names can be changed after creation.

The log’s start_time and end_time fields are automatically updated when new records are added to or removed from the log. The start_time is the earliest record time, and the end_time is the latest record time. The log’s duration field is automatically updated when new records are added to or removed from the log and represents the difference between the end_time and start_time fields.

The record_count and record_size fields are also automatically updated as records are added or removed from the log. The record_count field is an integer representing the number of records in the log and the record_size field is an integer representing the total size of all records in the log in bytes as measured by the record’s database size. The record size is not the size of the underlying message data, but rather the size of the record’s database representation.

The log’s base_timestamp field is an integer representing a number of nanoseconds which is added to all record times in the log. The base_timestamp field is intended to correct for occurrences where the log’s recorded times are not in sync with real-world time, e.g., a log was recorded on a machine without a synchronized clock, so its record times start at 0 and are off by some constant offset.
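
For example, assuming the base_timestamp is simply added to each stored record time, the effective real-world time of a record can be computed as follows:

# A log recorded on a machine whose clock started at zero: record timestamps
# are offsets from the start of recording, and base_timestamp supplies the
# real-world start time.
base_timestamp = 1_704_110_400_000_000_000  # 2024-01-01T12:00:00Z in nanoseconds
record_timestamp = 2_500_000_000            # 2.5 seconds into the recording

effective_timestamp = base_timestamp + record_timestamp  # nanoseconds since the Unix epoch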

Topics

Topics are the main sub-resources of logs. Topics can be thought of as containers for records, where all records associated with a topic are similar in terms of the data they contain and/or represent. Each topic is associated with one, and only one, log.

A topic’s name field must be unique within its log, but is otherwise arbitrary and only used for reference. Topic names can be changed after creation. A topic can also optionally be associated with one other topic within the same log via the associated_topic_id field. This association is used for reference, and is useful for keeping track of relationships between topics, such as when the records of one topic are derived from data from another topic.

Each topic has a set of fields indicating how the record data within the topic should be interpreted. This includes type_name, which is a string identifier for the type, type_encoding, which is a string indicating how the topic’s record’s data is encoded, type_data, which is a string providing reference for how the record data is structured, and type_schema, which is a JSON schema representing the structure of the record data. These fields need not be populated, but are useful for validation and interpretation of the record data as well as for the automated population of the record data. These fields may also be used in digestion processes, such as for extraction processes which write record data to new ROS bags or MCAP files.

In the context of typical robotics log data (such as ROS bags or MCAP files), the topic’s name might be the topic or channel name from the log file, the type_name might be the message type (e.g., sensor_msgs/Image, sensor_msgs/msg/Image, etc.), the type_encoding might be the serialization format (e.g., ros1, cdr, etc.), the type_data might be the full message definition, and the type_schema might be a JSON schema representing the message structure which can be used to validate the record data or by external applications.
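
As a hypothetical illustration (the values below are examples, not output from a real DataStore), a topic for a ROS 1 image channel might carry type fields along these lines:

# Hypothetical topic type fields for a ROS 1 image channel.
topic = {
    "name": "/camera/image_raw",
    "type_name": "sensor_msgs/Image",
    "type_encoding": "ros1",
    "type_data": "std_msgs/Header header\nuint32 height\nuint32 width\n...",  # full message definition (truncated here)
    "type_schema": {
        "type": "object",
        "properties": {
            "height": {"type": "integer"},
            "width": {"type": "integer"},
        },
    },
}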

Similar to logs, topics contain information about the number of records associated with the topic as well as the size of those records; the sum of record_count and record_size across all topics in a log will equal the log’s record_count and record_size. Similarly, the start_time and end_time of a topic are the earliest and latest record times, respectively, of the records associated with the topic. The start_time of the topic’s log will be the earliest start_time of all topics in the log, and the end_time of the topic’s log will be the latest end_time of all topics in the log. Topics also have a base_timestamp field, an integer number of nanoseconds which is added to all record times in the topic, analogous to the log’s base_timestamp field.

Records

Records are the most granular core resource in LogQS. Records represent the actual data points corresponding to the messages found in log data which are indexed in LogQS. Each record is associated with one, and only one, topic.

Every record has a populated timestamp field representing the nanoseconds since the Unix epoch at which the record was recorded. Within a topic, the timestamp field is unique, and records are naturally sorted by timestamp in ascending order.

Records effectively contain index information about the messages in the underlying log data. The data_offset field is a non-negative integer representing the byte offset of the start of the record’s underlying message data in its source log file, while the data_length field is a non-negative integer representing the length of the record’s underlying message data in bytes. That is, if you were to read data_length bytes starting at data_offset from the source log file (assuming it’s uncompressed), you would get the record’s underlying message data (which could then be deserialized based on the type information found on the record’s topic).

Some log formats support compression, which requires fetching more than just the record’s underlying message data in order to extract the message. In these cases, the data indexed by data_offset and data_length represents a “chunk” of data which contains the data needed to derive the actual message. When this is the case, the chunk_compression field will be populated with a string indicating the compression algorithm used to compress the chunk, and the chunk_offset and chunk_length fields will be populated: chunk_offset is a non-negative integer representing the byte offset of the start of the record’s message data within the uncompressed chunk, while chunk_length is a non-negative integer representing the length of the record’s message data within the uncompressed chunk.
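
Putting these fields together, extracting a record’s message bytes from a local, uncompressed copy of the source file is a seek-and-read; when chunk compression is involved, the chunk is read and decompressed first and the message is then sliced out of it. The sketch below assumes a local file and zstd compression purely for illustration; the actual algorithm is given by chunk_compression.

import zstandard  # assumed compression backend for this example only

def read_record_data(path: str, record: dict) -> bytes:
    # Read the bytes indexed by data_offset/data_length from the source file.
    with open(path, "rb") as f:
        f.seek(record["data_offset"])
        data = f.read(record["data_length"])

    if record.get("chunk_compression"):
        # The bytes are a compressed chunk; decompress it, then slice out the
        # message using chunk_offset/chunk_length.
        chunk = zstandard.ZstdDecompressor().decompress(data)
        start = record["chunk_offset"]
        return chunk[start:start + record["chunk_length"]]

    # No chunk compression: the bytes read are the message data itself.
    return data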

To locate the object which contains the record data, one would refer to the record’s ingestion_id to find the ingestion which created the record which will contain a reference to the location of the ingested object. Some log formats may have the log data stored across multiple objects. In this case, the source field on the record will be populated with a string indicating the relative path to the record’s data from the ingested object.

Records have three “data” fields: query_data, auxiliary_data, and raw_data.

The query_data field is a JSON object containing the data that can be used when querying records through the record list endpoint. This data is unstructured and unvalidated, but will typically contain a representation of the underlying message data. When possible, this field is populated during ingestion from the message data in a best-effort manner. The size of the query_data object is limited per record, with the limit configured per DataStore. During ingestion, if the message data is too large to fit in the query_data field, the field may either be populated with a subset of the data or left empty. The query_data field can be modified and should not be relied upon for critical, persistent data.

The auxiliary_data field is a JSON object containing any additional data that is useful for the record, but cannot be used for querying. The data which may be found in the auxiliary_data field is not stored in the database; rather, this data is fetched externally depending on the context. By default, when fetching or listing records, the auxiliary_data field is not populated to avoid unnecessary overhead and data transfer. It can be populated by passing the include_auxiliary_data query parameter when fetching or listing records.

The raw_data field is a string containing a “raw” representation of the underlying data. This field is not used for querying, but can be useful for fetching underlying message data through the record endpoints. Similar to the auxiliary_data field, the raw_data field is not populated by default when fetching or listing records, but can be populated by passing the include_raw_data query parameter when fetching or listing records. If the underlying data can be represented as a string, it will populate the raw_data field directly. If the underlying data is binary, the raw_data field will be populated with a base64 encoded string.
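
For example, when listing records with include_raw_data enabled, binary message data can be recovered by base64-decoding the raw_data field. The endpoint path and response shape below are assumptions for illustration.

import base64
import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only
TOPIC_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical topic ID

response = requests.get(
    f"{BASE_URL}/topics/{TOPIC_ID}/records",
    params={"include_raw_data": True},
    headers={"Authorization": "Bearer <credentials>"},
)
for record in response.json().get("data", []):  # response shape assumed
    if record.get("raw_data") is not None:
        # Binary payloads arrive base64 encoded; string payloads are stored as-is.
        message_bytes = base64.b64decode(record["raw_data"])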

Process Resources

Ingestions

Ingestions are resources which track ingestion processes. An ingestion process is the primary method of creating topics and records for logs. Valid ingestions are associated with one, and only one, object as well as one, and only one, log.

Ingestions are the primary way to get log data into LogQS. An ingestion process will typically read data from a log object, such as a ROS bag or MCAP file, and create topics and records in LogQS based on the data in the log object.

Ingestions belong to exactly one log (referenced via the log_id field) and cannot be moved between logs. The ingestion’s name field is not unique and is only used for reference (a common strategy is to use the object name as the ingestion name).

An ingestion must point to a single object via the object_key and object_store_id fields in order to be queued.
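
A typical flow, sketched below with illustrative endpoint paths and response shapes, is to create the ingestion pointing at an existing object and then queue it:

import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only
HEADERS = {"Authorization": "Bearer <credentials>"}

# Create an ingestion for an object that already exists in an object store.
ingestion = requests.post(
    f"{BASE_URL}/ingestions",
    json={
        "log_id": "<log id>",
        "name": "my_log.bag",  # commonly the object name, for reference
        "object_key": "logs/my_log.bag",
        "object_store_id": "<object store id>",
    },
    headers=HEADERS,
).json()  # response shape assumed to be the created resource

# Queue it; LogQS will create the log's topics and records from the object.
requests.patch(f"{BASE_URL}/ingestions/{ingestion['id']}",
               json={"state": "queued"}, headers=HEADERS).raise_for_status()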

Ingestion Parts

Ingestion parts are sub-resources of ingestions and are, themselves, processes as well. During ingestion, ingestion parts are created and contain record index information which is stored in the part’s index field. When an ingestion part processes, the record index information is used to fetch the underlying message data from the source log file and create records in LogQS. Ingestion parts are created and managed by the ingestion process and shouldn’t generally be created or managed directly by the user.

Ingestion parts include a nullable source field which is a string containing the relative path to the part’s data from the ingested object. This is useful for log formats which store data across multiple files, such as ROS bags or MCAP files. The source field is only populated when the ingestion part is created from an ingested object which contains multiple files.

Digestions

Digestions are resources which track digestion processes. A digestion process is used to transform or extract data from existing topics and records in LogQS. Valid digestions are associated with one, and only one, log.

When a digestion is created, it is expected that digestion topics will be created and associated with the digestion. These topics can only be created while the digestion is in the ready state. Once the digestion transitions out of the ready state, no more digestion topics can be created for the digestion and existing digestion topics for the digestion can’t be modified.

During a digestion process, the digestion will collect record index information for the records specified via the digestion topics. This information is then stored on associated digestion parts for the digestion. The digestion process then uses this information to handle the actual digestion process.

Digestion Topics

Digestion topics are sub-resources of digestions. They are used to specify which records should be included in the digestion process.

Each digestion topic is associated with one, and only one, topic in the log via the topic_id field. Each digestion topic also has a start_time and end_time field indicating the time range of records which should be included in the digestion. These fields are nullable, which indicates no limit on the time range (i.e., if both fields are null, then all records in the topic are included in the digestion; if just the start_time is null, then all records up to the end_time are included in the digestion; if just the end_time is null, then all records after the start_time are included in the digestion).

Digestion topics also have a frequency field which is a float representing the frequency at which records should be included in the digestion. This field is used to specify a sampling rate for the records. For example, a frequency of 0.1 indicates that 0.1 records per second (i.e., roughly one record every ten seconds) should be included in the digestion. This field is useful for reducing the amount of data that needs to be processed during the digestion.

Digestion topics also have query_data_filter and context_filter fields which are JSON objects containing filters that can be applied to the records in the topic. The query_data_filter field is used to filter records based on their query_data field, while the context_filter field is used to filter records based on their context field. These fields are useful for selecting specific records based on their content and context.
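
For illustration, a digestion topic selecting a 60-second window of one topic, sampled at roughly one record every ten seconds, might look like the following (the filter syntax is not covered here and is shown only as a placeholder):

# Hypothetical digestion topic payload.
digestion_topic = {
    "topic_id": "<topic id>",
    "start_time": 1_704_110_400_000_000_000,             # nanoseconds since the Unix epoch
    "end_time": 1_704_110_400_000_000_000 + 60 * 10**9,  # 60 seconds later
    "frequency": 0.1,                                     # 0.1 records per second
    "query_data_filter": None,                            # placeholder; filter syntax not shown
    "context_filter": None,
}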

Digestion Parts

Digestion parts are sub-resources of digestions and are, themselves, processes as well. During the digestion’s processing state, digestion parts are created and contain record index information which is stored in the part’s index field. When a digestion part processes, the record index information is used to fetch the underlying message data from the source log file and create records in LogQS. Digestion parts are created and managed by the digestion process and shouldn’t generally be created or managed directly by the user. Digestion parts have a sequence field which is an integer representing the order in which the part should be processed. This field is used to ensure that the parts are processed in the correct order based on time.

When a digestion part processes, a “record blob” is generated and stored in the associated object store for the digestion. The record blob contains message data for the records specified by the digestion part’s index. The record blob is a binary file which contains the message data for the records in a format that is optimized for storage and retrieval. This object is used in downstream tasks to more efficiently fetch the message data for the records in the digestion. When a digestion transitions to a completed state, the record blobs are deleted.

Object Storage Resources

Object Stores

Object stores are resources which represent external storage systems where log data is stored, such as AWS S3 buckets. To ingest from or digest to an object store, an object store must be created and configured with the necessary credentials and settings.

Object stores are configured with the following fields:

  • bucket_name: The name of the bucket in the object store where the log data is stored.

  • region_name: The region where the object store is located (e.g., us-east-1).

  • access_key_id: The access key ID used to authenticate with the object store.

  • secret_access_key: The secret access key used to authenticate with the object store.

  • endpoint_url: The optional URL of the object store’s API endpoint (if not populated, the default AWS S3 endpoint will be used).

When a user creates an object store, they submit access key credentials. The secret_access_key is encrypted in the database and can’t be fetched or changed after creation.

The user must create IAM credentials providing access to the underlying bucket in AWS. At a minimum, LogQS needs to be able to read objects from the bucket. The following policy allows LogQS to list and read objects from the bucket; the user should replace <BUCKET NAME> with the name of their bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LogQSReadOnlyAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::<BUCKET NAME>/*",
                "arn:aws:s3:::<BUCKET NAME>"
            ]
        }
    ]
}

LogQS also has the ability to write objects to an object store, assuming the necessary permissions are in place. The following policy allows LogQS to write objects to the bucket; again, the user should replace <BUCKET NAME> with the name of their bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LogQSWriteOnlyAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:::<BUCKET NAME>/*",
                "arn:aws:s3:::<BUCKET NAME>"
            ]
        }
    ]
}

Combining these two policies, the user can create a policy which allows LogQS to read and write objects to the bucket.
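
For example, such a combined policy might look like the following (again replacing <BUCKET NAME> with the name of the bucket):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LogQSReadWriteAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:::<BUCKET NAME>/*",
                "arn:aws:s3:::<BUCKET NAME>"
            ]
        }
    ]
}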

An object store has a read_only field which is a boolean indicating whether the object store is read-only. If true, the object store can only be used for reading objects and not writing objects. LogQS enforces this, so even if the policy allows writing, setting this field to true will prevent writing to the object store.

Object stores also have a key_prefix field which is used to programmatically “scope” which objects can be accessed by LogQS. This is useful for partitioning data within the object store. The key_prefix is a string which is prepended to the object key when accessing objects in the object store. For example, if the key_prefix is logs/ and the object key is my_log.bag, then the full object key will be logs/my_log.bag. If a user supplies an object_key which already contains the prefix, the prefix will not be prepended again. Similarly, any operations performed on objects in the object store will automatically have the prefix prepended to the object key. For example, if the key_prefix is logs/ and a user lists all objects in the object store, they will only see objects with keys that start with logs/; objects with keys that do not start with logs/ will not be visible.

Objects

Objects are files stored in an object store. LogQS interacts with object stores on behalf of the user when reading or writing objects; that is, LogQS simply acts as an interface to the object store and isn’t “aware” of the contents of the object store until it explicitly reads or writes to it. This means that objects which are uploaded outside the context of LogQS will be accessible via LogQS when objects from that store are read. Similarly, if objects are deleted or modified outside the context of LogQS, LogQS will not be aware of these changes until it explicitly reads or writes to the object store.

Objects have fields directly corresponding to what you’d find in the underlying object store:

  • key: The key of the object in the object store.

  • size: The size of the object in bytes.

  • etag: The ETag of the object in the object store. This is a string which is used to identify the object and is typically a hash of the object’s contents.

  • last_modified: The datetime at which the object was last modified in the object store. This is an ISO 8601 datetime string.

Additionally, objects have an optional, virtual presigned_url field which is populated when a user fetches an object. This field can be used to download the actual object data.

Objects also have an upload_state field, which is a string representing the state of the object in the object store, with possible values:

  • processing: The object is currently being uploaded through LogQS.

  • complete: The object has been uploaded and is ready to be read.

  • aborted: The object’s upload was aborted.

When an object is created through LogQS, it is initially in the processing state. In the underlying object store, it is created as a multipart upload. The user then creates object parts for the object, uploading the data for each part. Once all parts are uploaded, the user completes the multipart upload by updating the object’s upload_state to complete.

Object Parts

Object parts are sub-resources of objects. They represent the individual parts of a multipart upload to an object store.

When creating a part, the user must supply the part’s part_number and size. The part_number is an integer representing the order of the part in the multipart upload, while the size is an integer representing the size of the part in bytes. The created part contains a presigned_url field which can be used to upload the data for the part.

For object uploads, LogQS effectively just exposes the underlying multipart upload mechanisms through its API. The process, limits, and requirements of this process are dictated by the underlying object store. More information about the multipart upload process can be found in the AWS S3 documentation.
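
End to end, an upload through LogQS looks roughly like the sketch below: create the object, create a part, upload the bytes to the part’s presigned URL, and then mark the object complete. The endpoint paths and response shapes here are assumptions for illustration; only the general shape follows the multipart upload mechanics described above.

import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only
HEADERS = {"Authorization": "Bearer <credentials>"}
OBJECT_STORE_ID = "<object store id>"
key = "logs/my_log.bag"

with open("my_log.bag", "rb") as f:
    data = f.read()

# 1. Create the object; it starts in the processing upload_state.
requests.post(f"{BASE_URL}/object-stores/{OBJECT_STORE_ID}/objects",
              json={"key": key}, headers=HEADERS).raise_for_status()

# 2. Create a part and upload its data to the returned presigned URL.
part = requests.post(f"{BASE_URL}/object-stores/{OBJECT_STORE_ID}/objects/{key}/parts",
                     json={"part_number": 1, "size": len(data)},
                     headers=HEADERS).json()  # response shape assumed
requests.put(part["presigned_url"], data=data).raise_for_status()

# 3. Complete the multipart upload.
requests.patch(f"{BASE_URL}/object-stores/{OBJECT_STORE_ID}/objects/{key}",
               json={"upload_state": "complete"}, headers=HEADERS).raise_for_status()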

Log Objects

Log objects are an abstraction over objects. They represent objects associated with a specific log, stored in the DataStore’s default object store. Interacting with log objects is equivalent to interacting with objects in the default object store under a configurable prefix based on the log’s ID (by default, logs/<Log ID>/). This is useful for keeping track of objects which are associated with a specific log and for partitioning data within the object store.

Organization Resources

Groups

Groups are simple containers for logs. All logs belong to exactly one group. Groups are used to organize logs and provide a way to manage access to logs.

It is suggested to organize logs into groups based on some commonality, such as the project or system they belong to, the day they’re created, or the team that owns them. Groups can be used to manage access to logs, as users can be granted access to groups, which gives them access to all logs in the group. It is permissible to have all logs exist in a single group, but this is not recommended as it can make it difficult to find logs and manage access to them.

Labels

Labels are simple resources which are used to provide values for tags.

Each label has a unique value string field which should be meaningful to the user. Labels also have an optional category field which is a string that can be used to group similar labels together. For example, you might have labels for different types of weather (with values like “sunny”, “rainy”, “cloudy”, etc.) and use the category field to group them together under a “weather” category. Similarly, you might have labels for different types of vehicles (with values like “car”, “truck”, “motorcycle”, etc.) and use the category field to group them together under a “vehicles” category. These labels can then be applied to logs and topics through tags.

Tags

Tags effectively just join labels to logs and topics.

A tag must be associated with one, and only one, log (through the log_id field). A tag can optionally be associated with a topic within that log (through the topic_id field).

Tags have an optional start_time field and an optional end_time field. These fields can be used to specify the time range during which the tag is applicable. If both fields are null, then we interpret the tag as being applicable to the entirety of the log or topic. If just the start_time is null, then the tag is applicable to all records up to the end_time. If just the end_time is null, then the tag is applicable to all records after the start_time.

Workflows

Workflows are resources which are used to manage external processing of ingestions and digestions. Workflows are effectively just containers of hooks.

Each process is associated with one workflow, and as the process transitions through different states, the hooks associated with the workflow are called. This provides a means for handling processes outside the context of LogQS.

Workflows can either be assigned to ingestions or digestions, but not both. This is enforced via the workflow’s process_type field, which must be either ingestion or digestion. This field can’t be changed after creation.

Hooks

Hooks are resources which represent external webhooks that are called when a process transitions to a new state. Each hook is associated with one, and only one, workflow via its workflow_id field.

Hooks have a trigger_process field which is one of:

  • ingestion

  • ingestion_part

  • digestion

  • digestion_part

This field indicates which process the hook is associated with. The trigger_process field is used to determine when the hook should be called. For example, if the trigger_process is ingestion, then the hook will be called when the ingestion transitions to a new state. If the trigger_process is ingestion_part, then the hook will be called when an ingestion part transitions to a new state.

Hooks also have a trigger_state field which is one of:

  • ready

  • queued

  • processing

  • finalizing

  • complete

  • failed

  • archived

This field indicates which state the process must transition to for the hook to be called. For example, if the trigger_state is complete, then the hook will only be called when the process transitions to the complete state.

Hooks also have a uri field which is a string containing the URL of the webhook. When the hook is called, LogQS will make an HTTP POST request to this URL with a JSON payload containing information about the process and its current state. When a hook is created, an optional secret field can be provided. This secret is used to sign the payload, allowing the receiving service to verify that the request came from LogQS. The secret is a string which is generated by LogQS and is unique to the hook. The secret is not stored in the database and cannot be retrieved after creation. If a secret is not provided, LogQS will not sign the payload.
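
As a receiving-side sketch only: assuming, purely for illustration, that the payload is signed with an HMAC-SHA256 digest of the request body using the hook’s secret, verification might look like the following. The actual signing scheme and the header carrying the signature are defined by LogQS and should be taken from its API documentation.

import hashlib
import hmac

def verify_hook_signature(body: bytes, received_signature: str, secret: str) -> bool:
    # Illustrative only: recompute the digest and compare in constant time.
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)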

User Resources

Roles

Roles are resources which represent a set of permissions that can be assigned to users. Roles are used to control access to resources in LogQS. Each user can be assigned up to one role.

Roles have a policy field which is a JSON object containing the permissions that are granted to the role. A policy is a list of permission statements, each of which has the following fields:

  • effect: A string which is either allow or deny. This field indicates whether the statement allows or denies the specified actions.

  • action: A list of strings representing the actions that are allowed or denied by the statement. Actions are one of:
    • read: Allows read access to the resource.

    • write: Allows write access to the resource.

    • create: Allows creating the resource.

    • list: Allows listing the resource.

    • delete: Allows deleting the resource.

    • fetch: Allows fetching the resource.

    • update: Allows updating the resource.

    • *: Allows all actions on the resource.

  • resource: A list of strings representing the resources that the statement applies to (e.g., log, topic, etc.)

  • filter: A JSON object containing additional filters that can be applied to the statement. This field is nullable and can be used to further restrict the actions that are allowed or denied by the statement.

An example of a basic policy which allows read access to all resources would look like:

{
    "statement": [
        {
            "effect": "allow",
            "action": [
                "read"
            ],
            "resource": [
                "*"
            ],
            "filter": {}
        }
    ]
}

A more sophisticated policy might be one which allows users to read only select resources and to create digestions:

{
    "statement": [
        {
            "effect": "allow",
            "action": [
                "read"
            ],
            "resource": [
                "group",
                "log",
                "topic",
                "record",
                "label",
                "tag",
                "user",
                "digestion",
                "digestion_part",
                "digestion_topic",
                "workflow"
            ],
            "filter": null
        },
        {
            "effect": "allow",
            "action": [
                "write"
            ],
            "resource": [
                "digestion",
                "digestion_part",
                "digestion_topic"
            ],
            "filter": null
        }
    ]
}

A user with an assigned role will only be able to perform actions on resources that are allowed by the role’s policy unless the user is an admin, in which case role enforcement is bypassed. If a user tries to perform an action that is not allowed by their role, they will receive a 403 Forbidden error.

Users

Users represent individuals who have access to LogQS. Each user has a unique username field which is used to identify them. In Studio, the username is typically the user’s email address.

Users have an admin field which is a boolean indicating whether the user is an admin. Admin users have full access to all resources in LogQS and can perform any action. Non-admin users are subject to the permissions defined in their assigned role.

Although users can be associated with a human individual, they need not be. For example, a user could represent a service or application that needs access to LogQS. In this case, the username could be something like my_service and the user would be granted access to the resources they need to interact with.

API Keys

API keys are resources which represent a means of authenticating with LogQS. API keys are used to authenticate requests to the LogQS API and are typically used by external applications or services that need to interact with LogQS.

When creating an API key, the user must provide a unique name field, which is used to identify the key, as well as a user_id referencing the user the API key authenticates as. The response from the API key creation will include a secret field which is a string containing the actual API key. This secret is encrypted in the database and cannot be retrieved after creation; if the user loses their API key, they must create a new one. For programmatic access, the user must supply the API key’s ID and secret in the Authorization header of their request. The format of the header is:

Authorization: Bearer <Base64Encoded(API_KEY_ID:API_KEY_SECRET)>

In Python, the header can be constructed as follows:

import base64

api_key_id = "my_api_key_id"
api_key_secret = "my_api_key_secret"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + base64.b64encode(f"{api_key_id}:{api_key_secret}".encode()).decode(),
}
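
These headers can then be used with any HTTP client, for example (with an assumed base URL and an illustrative endpoint path):

import requests

BASE_URL = "https://api.example.com"  # assumed base URL, for illustration only

# List logs using the API key Authorization header constructed above.
response = requests.get(f"{BASE_URL}/logs", headers=headers)
response.raise_for_status()
print(response.json())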