
Configuring and Scheduling the OneDrive Permissions Collection

You can analyze permissions for an out-of-the-box application to determine its application permissions, provided you have defined an identity store for Data Access Security to use in its analysis and have run a crawl for the application.

Configuring the Permission Collection

The permission collector is a software component responsible for analyzing the permissions in an application.

Note

If the Data Access Security Central Permission Collector wasn’t installed during server installation, this configuration setting will be disabled.

To configure the permission collector:

  1. Go to Admin > Applications.
  2. Scroll through the list or use the filter to find the application.
  3. Select the Edit icon on the application row.
  4. Select Next until you reach the Crawler & Permissions Collection settings page.

    Note

    The entry fields vary by application type.

  5. Select a central permission collection service from the dropdown list. You can create permissions collection services as part of the service installation process.

  6. Under Analyze Files with Unique Permissions, the application type should be "OneDrive".
  7. Choose whether to skip identity synchronization before running permission collection tasks when the identity collector is shared by multiple connectors. This option is selected by default.

You can now schedule a task.

Scheduling a Task

To create a schedule:

  1. Select Create a Schedule.
  2. The system will provide a Schedule Task Name in the format {appName} - {type} Scheduler. Choose to keep or override this suggestion.
  3. Select a scheduling frequency from the dropdown list.

    Schedule Frequency Options
    • Run After - Creates a dependency between tasks. The task starts running only upon successful completion of the preceding task.
    • Hourly - Set the start time.
    • Daily - Set the start date and time.
    • Weekly - Set the day(s) of the week on which to run.
    • Monthly - Set the day of the month on which to run a task.
    • Quarterly - Set a monthly schedule with an interval of 3 months.
    • Half Yearly - Set a monthly schedule with an interval of 6 months.
    • Yearly - Set a monthly schedule with an interval of 12 months.
  4. Fill in the Date and Time fields with the scheduling times. These fields vary depending on the scheduling frequency selected.

  5. Select the Active checkbox to activate the schedule.
  6. Select Next.

Configuring and Scheduling the Crawler

To set or edit the Crawler configuration and scheduling:

  1. Go to Admin > Applications.
  2. Scroll through the list or use the filter to find the application.
  3. Select the Edit icon on the application row.
  4. Select Next until you reach the Crawler & Permissions Collection settings page.

    Note

    The entry fields vary by application type.

  5. In the Calculate Resource Size field, determine when, or at what frequency, Data Access Security calculates the resources' size:

    • Never
    • Always
    • Second crawl and on (default)
  6. Schedule a task.

  7. Set the Crawl Scope using the methods described in the following sections: Including and Excluding Paths by List, Excluding Paths by Regex, and Excluding Top-Level Resources.

Including and Excluding Paths by List

To set the paths to include or exclude in the crawl process for an application:

  1. Go to Admin > Applications.
  2. Scroll through the list or use the filter to find the application.
  3. Select the Edit icon on the application row.
  4. Select Next until you reach the Crawler & Permissions Collection settings page.

    Note

    The entry fields vary by application type.

  5. Scroll down to the Crawl configuration settings.

  6. Select Advanced Crawl Scope Configuration to open the scope configuration panel.
  7. Select Include / Exclude Resources to open the input fields.
  8. To add a resource to a list, enter the full path to include or exclude in the top field and select + to add it to the list.
  9. To remove a resource from a list, find the resource from the list, and select the x icon on the resource row.

Note

When creating exclusion lists, excludes take precedence over includes.
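As a hedged sketch of this precedence rule (illustrative only, using simple prefix matching; not the actual Data Access Security implementation, and the paths are made up):

```python
# Minimal sketch: excludes take precedence over includes when
# deciding whether a resource path is crawled.

def is_crawled(path, includes, excludes):
    """A path matching any exclude is skipped even if it also
    matches an include."""
    if any(path.startswith(e) for e in excludes):
        return False
    # With no include list, everything not excluded is crawled.
    if not includes:
        return True
    return any(path.startswith(i) for i in includes)

includes = ["Personal/John.Doe@example.com"]
excludes = ["Personal/John.Doe@example.com/Archive"]

print(is_crawled("Personal/John.Doe@example.com/Reports", includes, excludes))          # True
print(is_crawled("Personal/John.Doe@example.com/Archive/old.docx", includes, excludes)) # False
```

The exclusion check runs first, which is why an excluded path stays out of the crawl even when an include rule also covers it.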

Excluding Paths by Regex

To set filters of paths to exclude in the crawl process for an application using regex:

  1. Go to Admin > Applications.
  2. Scroll through the list or use the filter to find the application.
  3. Select the Edit icon on the application row.
  4. Select Next until you reach the Crawler & Permissions Collection settings page.

    Note

    The entry fields vary by application type.

  5. Select Exclude Paths by Regex to open the configuration panel.

  6. Enter the paths to exclude by regex. Since the system does not collect Business Resources that match this regex, it also does not analyze them for permissions.

Crawler Regex Exclusion Examples

Exclude all drives which start with one or more user names:

  • Exclude drives starting with John.Doe: ^Personal\/John\.Doe@.*

  • Exclude drives starting with John.Doe or Jane.Doe: ^Personal\/(John|Jane)\.Doe@.*

Include ONLY drives which start with one or more user names:

  • Include only drives starting with John.Doe: ^(?!Personal\/John\.Doe@.*).*

  • Include only drives starting with John.Doe or Jane.Doe: ^(?!Personal\/(John|Jane)\.Doe@.*).*

Narrow down the selection:

  • Include only the C$ drive shares: \\server_name\C$: ^(?!\\\\server_name\\C\$($|\\.*)).*

  • Include only one folder under a share: \\server\share\folderA: ^(?!\\\\server_name\\share\$($|\\folderA$|\\folderA\\.*)).*

  • Include only administrative shares: ^(?!\\\\server_name\\[a-zA-Z]\$($|\\.*)).*

Notes

  • To use a backslash or $ sign, add a backslash before it as an escape character.

  • To add a condition in a single command, use a pipe character |.
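Before applying patterns like the ones above, it can help to test them against sample paths. Here is a minimal sketch using Python's re module (the patterns are copied from the examples; the user names and paths are illustrative):

```python
import re

# Exclusion pattern: matches (and so excludes) John.Doe's drive.
exclude_john = r"^Personal\/John\.Doe@.*"

# Negative-lookahead pattern: matches (and so excludes) every path
# that does NOT start with John.Doe's drive, leaving only his drive.
include_only_john = r"^(?!Personal\/John\.Doe@.*).*"

paths = [
    "Personal/John.Doe@example.com/Documents",
    "Personal/Jane.Doe@example.com/Documents",
]

print([bool(re.match(exclude_john, p)) for p in paths])       # [True, False]
print([bool(re.match(include_only_john, p)) for p in paths])  # [False, True]
```

A path that matches the configured regex is excluded from the crawl, so the "include only" patterns work by matching everything except the drives you want to keep.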

Excluding Top-Level Resources

Use the top-level exclusion screen to select top-level roots to exclude from the crawl. This setting is done per application.

To exclude top-level resources from the crawl process:

  1. Go to Admin > Applications.
  2. Find the application to configure and select the dropdown list menu on the application line. Select Exclude Top Level Resources to open the configuration panel.
  3. Select the Run Task button to trigger a task that runs a short detection scan to detect the current top-level resources. If the top-level resource list has changed in the application while you are on this screen, select the Run Task button to retrieve the updated structure.
  4. Once triggered, you can view the task status in Settings > Task Management > Tasks, depending on your access to the task page.
  5. When the task has completed, select Refresh to update the page with the list of top-level resources.
  6. Select the top-level resource list and choose top-level resources to exclude.
  7. Select Save to save the change.
  8. To refresh the list of top-level resources, run the task again. Running the task will not clear the list of top-level resources to exclude.

Special Considerations for Long File Paths in Crawl

Data Access Security uses a hashing mechanism to create a unique identifier for each business resource stored in the Data Access Security database.

If you are using SQL Server 2014 or earlier, you may see the following error message in the Permission Collection Engine log file, because the hashing mechanism in those SQL Server versions cannot process (hash) values of 4,000 or more characters: System.Data.SqlClient.SqlException (0x80131904): String or binary data would be truncated.

If you need to support file paths above 4,000 characters for the crawl:

  1. Access the Permission Collection Engine App.config file in the %SailPoint_Home%\FileAccessManager\[Permission Collection instance] folder.
  2. Search for excludeVeryLongResourcePaths in the Permission Collection Engine App.config file and set it to true.

By default, this value is commented out and set to false.

When enabled, this key excludes paths longer than 4,000 characters from the application's resource discovery (crawl), avoiding issues when storing them in the SQL Server database. Business resources with full paths longer than 4,000 characters, and everything below them in the hierarchical structure, are excluded from the crawl and are not collected by Data Access Security. This scenario is extremely rare.
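The relevant appSettings entry might look like the following sketch (the surrounding file layout is an assumption and may differ in your installation; only the key name excludeVeryLongResourcePaths comes from the steps above):

```xml
<!-- Permission Collection Engine App.config (fragment; other sections
     omitted). Uncomment the key and set it to "true" to exclude paths
     longer than 4,000 characters from the crawl. -->
<configuration>
  <appSettings>
    <!-- Default: the key is commented out, i.e. effectively "false".
    <add key="excludeVeryLongResourcePaths" value="false" />
    -->
    <add key="excludeVeryLongResourcePaths" value="true" />
  </appSettings>
</configuration>
```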

Note

You should not enable exclusion of long paths unless you experience an issue.
