How Access History Works
History events related to Access History objects are constructed and published to the history writer API through a scheduled event or ad hoc service. You can configure how frequently Access History runs, but the default is once per day.
The Dispatch Access History task determines the set of IdentityIQ objects to extract from IdentityIQ, which are those that haven't been captured previously or have changed significantly since the last time Access History was executed.
Note
The initial run of Dispatch Access History extracts all identity and related objects, and it can take a long time to execute. Subsequent runs of Dispatch Access History are faster because only new or changed objects are extracted and processed.
Each extracted object describes all of the information about a single IdentityIQ access history object (e.g., identity, role, account, etc.). An extracted object can also reference other objects. The extracted objects are formatted into JSON and enqueued to the Access History writer service.
The Dispatch Access History task processes the extracted JSON objects, persisting the information into the tables in the access history database used specifically by the Access History user interface (UI).
Each partition of the Dispatch Access History task periodically saves its progress during its work cycle. If the original host goes down, another host can resume processing from the last saved state, so requests do not need to start over after a crash or failure.
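The "restart at last state" behavior can be pictured with a short sketch. This is not IdentityIQ code: the state store and function names are hypothetical stand-ins for the task's RequestState persistence, and the checkpoint interval mirrors the lossLimit task argument described later.

```python
# Illustrative sketch of "restart at last state" checkpointing.
# state_store and publish are hypothetical stand-ins; loss_limit mirrors
# the lossLimit task argument.

def process_partition(objects, publish, state_store, loss_limit=2500):
    """Process objects, snapshotting progress every loss_limit items."""
    start = state_store.get("last_index", 0)  # resume point after a crash
    for i, obj in enumerate(objects[start:], start=start):
        publish(obj)
        if (i + 1) % loss_limit == 0:
            # Persist progress so another host can pick up from here.
            state_store["last_index"] = i + 1
    state_store["last_index"] = len(objects)

state = {}
seen = []
process_partition(["id1", "id2", "id3"], seen.append, state, loss_limit=2)
print(state)  # {'last_index': 3}
```

If the host dies mid-run, a replacement host reads the saved index and continues from there instead of reprocessing the whole partition.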
The Access History UI lets you search the access history database. There are also APIs that the UI can use to query against the database, processing Identity, Role, Identity Entitlements, Certifications, Identity Requests, Managed Attributes, Capabilities, Accounts, Workgroups, and Policy Violations.
The system identifies duplicates so they are not processed twice and can distinguish between initial events and change events. Unchanged objects are not processed.
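The qualification logic described above — skip duplicates, skip unchanged objects, serialize the rest to JSON for the writer service — can be sketched as follows. The field names (`id`, `modified`) and the queueing shape are assumptions for illustration, not IdentityIQ internals.

```python
import json

def qualify(objects, last_run_time, seen_ids):
    """Return JSON messages for new or changed objects only."""
    messages = []
    for obj in objects:
        if obj["id"] in seen_ids:
            continue                      # duplicate: already queued this run
        if obj["modified"] <= last_run_time:
            continue                      # unchanged since the last run
        seen_ids.add(obj["id"])
        messages.append(json.dumps(obj))  # enqueue to the writer service
    return messages

objs = [{"id": "a", "modified": 5},
        {"id": "b", "modified": 1},   # unchanged: skipped
        {"id": "a", "modified": 5}]   # duplicate: skipped
print(qualify(objs, last_run_time=2, seen_ids=set()))
```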
Access History is disabled by default to allow you to configure it. To use this functionality, you need to complete the following:
- Setting Up Access History Database and Tables
- Setting Up Access History Task
- Scheduling the Access History Task
- Configuring Access History
Setting Up Access History Database and Tables
The database for storing Access History data is separate from the IdentityIQ database. The IdentityIQ install and upgrade scripts create separate databases for IdentityIQ and Access History data. The databases can be within the same instance for convenience, but separate database instances are recommended for production environments to avoid an impact on IdentityIQ performance. Depending on your environment setup and on the number of daily changes to your identities, the Access History database can be large, and will continue to grow.
The separate IdentityIQ Access History database is required, even when the Access History feature is disabled or is not being used. See the IdentityIQ Install Guide.
During the initial run, the Dispatch Access History task collects all necessary objects and populates the Access History database tables. This ensures that on a fresh install or upgrade, you start with a complete access history store.
IdentityIQ Access History database includes the following tables:
- spt_hist_account_capture
- spt_hist_accounts
- spt_hist_assigned_roles
- spt_hist_capability
- spt_hist_capability_capture
- spt_hist_certification
- spt_hist_cert_remediation_capture
- spt_hist_detected_roles
- spt_hist_entitlement_capture
- spt_hist_entitlements
- spt_hist_identity
- spt_hist_identity_capture
- spt_hist_identity_event
- spt_hist_identity_req_capture
- spt_hist_identity_req_item_capture
- spt_hist_mattr
- spt_hist_mattr_capture
- spt_hist_mattr_event
- spt_hist_object_config_capture
- spt_hist_policy_violation_capture
- spt_hist_policy_violation_remediation_bundle_ids
- spt_hist_policy_violation_remediation_capture
- spt_hist_policy_violation_remediation_entitlements
- spt_hist_policy_violations
- spt_hist_role
- spt_hist_role_capture
- spt_hist_role_event
- spt_hist_workgroup
- spt_hist_workgroup_capture
- spt_hist_workgroup_event
Setting Up Access History Task
There is a preconfigured task named Dispatch Access History. This is a scheduled task that runs every day at midnight by default.
Alternatively, you can create your own Access History task to run on your IdentityIQ instance:
1. Navigate to Setup > Tasks.
2. Select the New Task dropdown in the upper right corner.
3. From the dropdown list, select Access History.
4. On the New Task screen, enter a name for your task, and add any other optional field information you would like.
5. Under Access History Options, select AccessHistoryExportConfig from the dropdown.
Note
If you are setting up multiple Access History tasks with slightly different timings, you must provide a unique YAML for each and enter a unique name here for each task that you create.
6. Select Save, Save & Execute, Cancel, or Refresh.
Once the task is saved, you can set optional task arguments, such as lossLimit and partition, on the Debug page.
The default task arguments are:
Argument Name | Default | Description |
---|---|---|
lossLimit | 2500 | The state of Data Extract Partitions will be snapshotted to its RequestState object each time it processes an additional set of lossLimit objects. |
maxObjectAttempts | 5 | If an object fails to be extracted or published during a run of Access History or Data Extract, that is considered a failed attempt. The failed object will be processed again in subsequent runs of the task if it has failed less than maxObjectAttempts times. |
maxFailuresAbsolute | 500 | If more than maxFailuresAbsolute objects failed to be extracted or published during a run of the task, the task will be marked with an error, the NamedTimestamp date will not be altered, and the failed objects will not be saved. Thus, the next run will be a redo. |
maxFailuresPercent | 5 | If more than maxFailuresPercent percent objects failed to be extracted or published during a run of the task, the task will be marked with an error, the NamedTimestamp date will not be altered, and the failed objects will not be saved. Thus, the next run will be a redo. |
minExtractPartitions | 5 | A hint for the minimum number of data extract partitions to launch. This hint will be ignored (exceeded) if there are more than maxObjectsPerExtractPartition * minExtractPartitions objects to process. |
maxExtractPartitions | 50 | A hint for the maximum number of data extract partitions to launch. This hint will be ignored (exceeded) if there are more than maxObjectsPerExtractPartition * maxExtractPartitions objects to process. |
maxObjectsPerExtractPartition | 50000 | The maximum number of objects which will be delegated to a single data extract partition. |
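As a sketch of how the three partition arguments might interact, consider the following. The actual partitioning algorithm is internal to IdentityIQ; this shows one plausible reading of minExtractPartitions, maxExtractPartitions, and maxObjectsPerExtractPartition as hints.

```python
import math

# Illustrative sketch only -- the real IdentityIQ partitioning logic is
# internal. Defaults mirror the task arguments in the table above.
def plan_partitions(total_objects,
                    min_partitions=5,
                    max_partitions=50,
                    max_objects_per_partition=50000):
    """Return the number of data extract partitions to launch."""
    if total_objects == 0:
        return 0
    # Enough partitions that none exceeds max_objects_per_partition...
    needed = math.ceil(total_objects / max_objects_per_partition)
    # ...at least min_partitions, and capped at max_partitions unless the
    # cap would overload each partition (then the hint is exceeded).
    return max(min_partitions, needed) if needed <= max_partitions else needed

print(plan_partitions(10_000))     # 5  -- small run: the minimum hint applies
print(plan_partitions(600_000))    # 12 -- 600k / 50k objects per partition
print(plan_partitions(5_000_000))  # 100 -- exceeds the maximum hint
```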
When the task executes, it determines which objects are configured for export, applies the filter criteria and any limits you have set, translates those objects into JSON documents, and publishes them to the Access History database.
After execution, review the Task Results, which display all of the differences as well as the attribute statistics. See Viewing Data Extract Task Results(LINK IN DOC).
The results declared for the task are:
Result Label | Result Variable | Description |
---|---|---|
Number of Objects qualified for extract | totalObjectMessages | Count of the objects which were qualified for processing. This is the sum of totalModifiedObjectMessages and totalReattemptObjectMessages. Always shown. |
Number of Objects Qualified by Change | totalModifiedObjectMessages | Count of the modified objects which were qualified for processing. Only shown if totalReattemptObjectMessages > 0. |
Number of Objects Qualified by Re-attempt | totalReattemptObjectMessages | Count of the previously failed objects which were qualified for another re-attempt at processing in this run. Only shown if > 0 |
Number of Deletion Objects | totalDeletionExtractedObjects | Count of the rows in spt_intercepted_delete which were attempted to be published for this task. Shown if > 0. |
Deletion Objects Published | totalDeletionExtractedObjectsDispatched | Count of the rows in spt_intercepted_delete which were successfully published by this task. Shown if > 0. |
Number of Objects Processed | totalSeenObjects | Count of the objects which were processed (across all partitions). This does not imply whether or not they were successfully extracted and published -- only that a partition attempted to process it. Only shown if > 0 |
Number of Objects Unprocessed | totalUnseenObjects | If any objects are left unprocessed because one or more partitions exited prematurely (e.g., due to too many failures), totalUnseenObjects is populated with the count of unprocessed objects. Only shown if > 0. |
Number of Objects Successfully Extracted | totalExtractedObjects | Count of the objects which were successfully extracted (across all partitions). Only shown if > 0. |
Number of Objects Not Found | totalExtractedObjectsNotFound | Count of the objects which were not found in the database during extraction (across all partitions). Only shown if > 0. |
Number of Objects that Failed to Extract | totalExtractedObjectsFailed | Count of the objects which encountered exceptions during extraction (across all partitions). Only shown if > 0. |
Number of Objects Successfully Published | totalExtractedObjectsPublished | Count of the objects which were successfully published (across all partitions). Only shown if > 0. |
Number of Objects that Failed to Publish | totalPublishingFails | Count of the objects which encountered exceptions during publishing (across all partitions). Only shown if > 0. |
Number of Abandoned Re-attempts | totalDroppedObjects | Count of the failed objects that have exceeded their re-attempt limit, and will not be attempted again. |
Scheduling the Access History Task
Schedule the Dispatch Access History task to run at a time or cadence you choose. See Tasks Overview(LINK IN DOC) and How to Schedule a Task(LINK IN DOC).
Configuring Access History
The Access History Configuration page is available under Global Settings for users with the appropriate rights (Access History Admin). The page allows you to edit and save these options.
1. Navigate to the gear > Global Settings > Access History Configuration page, which includes Access History Controls.
2. Make sure the Enable Access History checkbox is selected.
3. Select Save Changes or Cancel.
Additional configuration may be completed in the Debug page, where the following objects are available:
- Configuration object "SystemConfiguration"
  - accessHistoryEnabled
    Set to True to enable Access History.
- Configuration object "AccessHistoryConfiguration"
  - jsonFormat
    If set to PRETTY, JSON strings are stored in an easy-to-read format.
    If set to MINIFIED, JSON strings are stored as unformatted JSON with white space removed.
    If set to ZIPPED (default), JSON strings are compressed.
Note
Due to space considerations, leave jsonFormat set to ZIPPED unless support recommends otherwise for troubleshooting.
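The three formats can be pictured with standard JSON tooling. This is an illustrative sketch using Python's json and gzip modules, not IdentityIQ's actual serializer; the sample record is made up.

```python
import gzip
import json

record = {"id": "1234", "type": "Identity", "name": "ada.lovelace"}

pretty = json.dumps(record, indent=2)                 # PRETTY: human-readable
minified = json.dumps(record, separators=(",", ":"))  # MINIFIED: no whitespace
zipped = gzip.compress(minified.encode("utf-8"))      # ZIPPED: compressed bytes

# Compression pays off as documents grow large and repetitive, which is
# why ZIPPED is the recommended default for production.
print(len(pretty), len(minified), len(zipped))
```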
- maxAllowedPatches
A patch capture only captures the differences from the previous capture of the object. It is useful for saving space.
If maxAllowedPatches > 0, patch document support is enabled. This property identifies the number of patch documents that can be saved between full captures, e.g., if the value is 2, every third capture will be a full capture. This pattern can change based on the config property captureMaxAgeInDays.
- captureMaxAgeInDays
If a patch document is older (in days) than the number specified in captureMaxAgeInDays, a full capture will be taken regardless of the number of patch documents.
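A minimal sketch of the full-versus-patch decision described above. The property names mirror the configuration, but the decision code and the 30-day default age are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Illustrative decision logic; the 30-day default age is an assumed value.
def capture_type(patches_since_full, last_full_capture,
                 max_allowed_patches=2, capture_max_age_in_days=30,
                 now=None):
    """Decide whether the next capture should be 'full' or 'patch'."""
    now = now or datetime.now()
    if max_allowed_patches <= 0:
        return "full"   # patch document support disabled
    if patches_since_full >= max_allowed_patches:
        return "full"   # e.g., with 2, every third capture is full
    if now - last_full_capture > timedelta(days=capture_max_age_in_days):
        return "full"   # age override: too long since the last full capture
    return "patch"

now = datetime(2024, 6, 1)
print(capture_type(0, datetime(2024, 5, 20), now=now))  # patch
print(capture_type(2, datetime(2024, 5, 20), now=now))  # full (budget spent)
print(capture_type(1, datetime(2024, 1, 1), now=now))   # full (too old)
```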
A default Extract YAMLConfig is available for Access History. If you prefer to create a custom configuration, see Configuring Data Extraction(LINK IN DOC).
Note
A severe performance problem can occur when optimizeFailoverEnabled is true, and your hostname resolves to an unreachable IP address.
Troubleshooting Access History Task Failures
If the Access History feature is not enabled, the task completes but the Task Result has a status of Warning and a banner states, "The Access History feature must be enabled in order to support history events."
If the Extract YAML Config is missing the transform configuration name, the status indicates Fail and a banner states, "No transformation configuration name was found in the Export YAMLConfig."
If the Extract YAMLConfig has a bad value for the transform configuration name, or references one that does not exist, the status indicates Fail and a banner states, "Unable to find YAMLConfig AccessHistoryImageConfig2."
If the Extract YAMLConfig is missing a value for extracted objects, the status indicates Warning and a banner states, "The selected Export YAMLConfig is missing entry for exportedObjects."
If the Transform YAMLConfig has a bad value in the extracted objects list, the status indicates Warning and a banner states, "The Transform YAMLConfig is missing imageConfigDescriptor for exportedObject [object]." Note that other extracted objects with valid values will still be processed.
If the Transform YAMLConfig is missing an entry for one of the extracted objects, the status indicates Warning and a banner states, "The Transform YAMLConfig is missing ImageConfigDescriptor for importedObject Certification."
If the reading or processing of InterceptedDelete objects fails for any reason, the status indicates Warning and a banner states, “The row in the spt_intercepted_delete table remains in the database and will be retried during the next run. There is no limit to the number of attempts made for processing an InterceptedDelete row.”
If the Data Extract Processor for the xxxx partition encounters any failures while extracting or publishing objects, the status indicates Warning and a banner states, “The list of failed objects is written to the task's NamedTimestamp object.”
If the number of newly failed objects exceeds the defined thresholds (maxFailuresAbsolute or maxFailuresPercent), the status indicates Warning and a banner states, “The task will fail with an error, and the NamedTimestamp will not be updated (neither the date nor the list of failures).”
If the number of newly failed objects is below the threshold, the task will complete successfully. The status indicates Warning and a banner states, “The NamedTimestamp will be updated with both the date and the list of failed objects.”
Note
During the Extract Partition Creation execution, the list of failed objects (if any) is read from the NamedTimestamp object and scheduled to be processed by the Data Extract Processor requests, along with any newly changed objects found in the IdentityIQ database. However, the number of re-attempts for a failed object is limited by the task argument maxObjectAttempts. When an object has been attempted more than maxObjectAttempts times, it is discarded from the failure list in NamedTimestamp.
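The failure-handling rules governed by maxObjectAttempts, maxFailuresAbsolute, and maxFailuresPercent can be sketched as follows. This is illustrative logic only; the function and return shape are invented for the sketch, and the defaults mirror the task arguments.

```python
# Illustrative sketch of the failure thresholds and re-attempt pruning.
def evaluate_run(failed, total_qualified, attempt_counts,
                 max_object_attempts=5,
                 max_failures_absolute=500,
                 max_failures_percent=5):
    """Return (run_status, retry_list) for a completed run.

    failed: ids of objects that failed this run
    attempt_counts: cumulative failed attempts per object id
    """
    # Too many failures: the run errors out, NamedTimestamp is not
    # updated, and the next run is a full redo.
    if len(failed) > max_failures_absolute:
        return "error", []
    if total_qualified and 100 * len(failed) / total_qualified > max_failures_percent:
        return "error", []
    # Otherwise keep only objects still under their re-attempt budget;
    # the rest are abandoned (counted as totalDroppedObjects).
    retry = [obj for obj in failed
             if attempt_counts.get(obj, 0) < max_object_attempts]
    return "warning" if failed else "success", retry

status, retry = evaluate_run(failed=["a", "b"], total_qualified=1000,
                             attempt_counts={"a": 1, "b": 5})
print(status, retry)  # warning ['a'] -- 'b' exceeded its attempt budget
```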
Out of Sequence Events
Sometimes messages arrive in Access History out of chronological order. This can happen if a processing error forces a message to be requeued, or if there are multiple consumers and one finishes processing a newer message before another finishes processing an older one. When this happens, the system handles these messages by requeuing them in the correct chronological order.
In cases where an image processor entity fails to be processed, the processor handler gives an exception error and the message is requeued for another attempt to be processed. You can configure the redelivery delay and maximum number of redelivery attempts. After the message exceeds the number of redelivery attempts, it is discarded and sent to the dead letter queue. See Configuring Access History.
Occasionally, due to bad data, there is no way to process a message. In that case, it is discarded, logged in the discarded messages log file as INFO, and sent to a dead letter queue.
To allow the timeline to show discarded messages, an optional event called DiscardedMessage can be enabled via the Debug page. It is disabled by default; to enable it, set the following Access History property to true:
<entry key="createDiscardedMessageEvent" value="true" />
By default, Access History allows you to audit removed event records. You can disable this functionality from the Audit Configuration page. These records may be created during out-of-sequence processing, where a new capture is inserted between two existing captures: the events generated between the two existing captures must be removed and new events generated. See Audit Configuration(LINK IN DOC).