This page describes the Perform Tasks tab in v41.2. If you are using v41.0-v41.1, see Perform Tasks Tab (v41.0-v41.1).
The Perform Tasks tab within the Tasks section of the application displays the Supervision and QA tasks that are ready to be worked on, broken down by type. To return to the Perform Tasks tab from another page, click the Hyperscience logo in the upper-left corner of the application.
Filters
Use the All Flows drop-down list to filter the Supervision and Quality Assurance tasks by flow.
Supervision Tasks
The Supervision Tasks table displays all manual task types based on your permissions and the selected flow.
Supervision task type - Lists the available tasks you can work on (e.g., Document Classification, Identification, Transcription, Flexible Extraction, Custom Supervision). Learn more about each task type in Supervision Tasks.
Total tasks - Indicates the number of pending tasks for each type.
Overdue - Displays all overdue tasks. Learn more about SLAs in Prioritizing Submissions.
Actions - The Perform Tasks button allows you to start working on a specific type of task.
Quality Assurance Tasks
The QA (Quality Assurance) Tasks table shows task types where you review and verify the accuracy of manual or machine work, depending on the settings and the use case.
QA task type - Lists the available tasks you can work on (e.g., Document Classification QA, Identification QA, Transcription QA, Full Page Transcription QA, Vision Language Model QA). Learn more about each task type in Quality Assurance Tasks.
Total tasks - Indicates the number of pending QA tasks for each type.
Actions
Click the menu to Clear QA tasks for a specific type.
Click the Perform Tasks button to start working on the available QA tasks.
Task count
The task count shows how much work is currently available, displayed as the number of pending tasks next to each Supervision and QA task type.
Task count limits
You can control how task counts are displayed by adjusting the TASK_LIMITS_PERFORM_TASKS_PAGE setting in the .env file. By default, this limit is set to 10,000 tasks. For example, if you set a limit of 1,000 and the number of pending tasks reaches or exceeds 1,000, the UI shows "999+" for that task type.
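As a sketch, the setting above might look like this in your .env file (the file's exact location and surrounding contents depend on your deployment; only the setting name and default come from this page):

```shell
# .env — caps the task count shown on the Perform Tasks page.
# With this value, 1,000 or more pending tasks of a type display as "999+".
# The default limit is 10,000 tasks.
TASK_LIMITS_PERFORM_TASKS_PAGE=1000
```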
Task types
Supervision tasks
Document Classification - Categorize and combine pages that were not classified by the machine. To learn more, see Document Classification.
Identification - Annotate the fields and tables in your documents, depending on the use case. Learn more in Field Identification and Table Identification.
Transcription - Extract the data you need by transcribing the values from annotated fields and tables. To learn more, see Transcription.
Flexible Extraction - Used for Structured pages manually matched to a layout, or documents routed via custom validation rules. These tasks are similar to Transcription tasks but are driven by layout logic. To learn more, contact your Hyperscience representative.
Custom Supervision - Covers documents routed to a custom flow that includes a Custom Supervision Block. The content and format of these tasks are specific to the flows they are configured for. Learn more in Custom Supervision, and contact your Hyperscience representative for more details.
Quality Assurance tasks
Document Classification QA - Review and validate the accuracy of human or machine input, based on your settings. To learn more, see Automatic Document Classification and Accuracy.
Identification QA - Review and validate the accuracy of the input, based on the settings and the use case. Learn more in Field Identification Quality Assurance and Table Identification Quality Assurance.
Transcription QA - Review and validate the accuracy of the transcribed data. Learn more in Transcription Accuracy and Automation.
Full Page Transcription QA (FPT QA) - Determine the accuracy of the Full Page Transcription block's output by reviewing and validating the block’s transcriptions. To learn more, see Full Page Transcription Quality Assurance.
Vision Language Model QA (VLM QA) - Review transcriptions and indicate whether they are correct or incorrect. The data from these tasks is then used to calculate the accuracy of the Vision Language Model’s output. For more information, see Vision Language Model Quality Assurance.