
Zantaz Ai Smart Data | Terabyte Tester

Page 1


Page 2

Table of Contents | Terabyte Tester Program

Welcome ...................................................... 3
Sample Daily Workflow ........................................ 5
Phase 1: Data Selection and Configuration .................... 6
Phase 2: Scanning and Analysis ............................... 11
Phase 3: Reporting and Review ................................ 23
Phase 4: Closing the Terabyte Tester ......................... 24
Frequently Asked Questions ................................... 25

Page 2 Zantaz Terabyte Tester Program

Page 3

Welcome | Terabyte Tester Program

Welcome to the Zantaz Ai Smart Data Processing Terabyte Tester Program. It is a rapid, innovative, one- to two-week experiential approach that introduces you to the world's most advanced and comprehensive solution for transforming unstructured data into actionable smart data.

During the Tester Program, you will analyze up to 1 Terabyte of your data, staged and copied to a secure location, with surface (metadata) and deep (content) scanning to:

● Find and tag redundant, obsolete, or trivial (ROT) data.
● Clarify data ownership and custodial responsibilities.
● Uncover hidden risks and vulnerabilities.
● Identify opportunities for cost savings and return on investment.

The Terabyte Tester Program is delivered via a secure PremCloud Hosting SaaS tenant that will be set up just for you. PremCloud Hosting staff will be with you every step of the way to help guide you toward success. From selecting and staging data for analysis, to directing you through scanning and processing, to reviewing, reporting, and planning for desired actions, an expert will always be at your side.

The following generalized Terabyte Tester Program steps will help you achieve success with Zantaz Ai Smart Data Processing:

1. Data Set for Analysis
Identify a data set you want to analyze with the Zantaz Ai Smart Data Processing solution. This should be a sample of your generated unstructured data (up to 1 Terabyte) copied into a secure and stable location, separate from your mainstream production data stores. We will review the potential data set with you to help you select data representative of your data estate. Staging the data separately will allow you to evaluate the security and privacy of the program, as well as test solution functionality before using it in production.

2. Accounts and Remote Access
Ai Smart Data Processing will require a service account to process your data. Ai Smart Data Processing will establish a secure remote access connection between your data set repository and the Terabyte Tester SaaS tenant. Connections and access will be specific to each single-tenant customer environment.

3. Firewall Rules and Settings
Specific ports, protocols, and resolvable names or IP addresses may be needed to enable secure connections to your data set from the Terabyte Tester SaaS tenant.

4. Process Time
While each data set and connection condition will vary, a Terabyte Tester Program, from onboarding to scanning, analysis, reporting, and review, is generally expected to take about 1-2 weeks. A more detailed example of a work plan can be found in the "Sample Daily Workflow of the Terabyte Tester Program" section.

Page 4

5. Support
During the Terabyte Tester Program, you will receive support through direct contact with your Account Manager and a Program Technical Resource engineer.

6. Post Terabyte Tester
The Terabyte Tester SaaS tenant can be transitioned to a production-level Zantaz Ai Smart Data Processing SaaS deployment, where it will be expanded and configured to process other datasets and repositories in your environment. Alternatively, Zantaz Ai Smart Data Processing can be deployed in your tenancy or on-premises, and architected to your environment's needs.

7. What's next?
Let's get started. The following more detailed program tasks will guide you to success. You are ready to begin once you have staged the data and granted connectivity and access rights. Zantaz Ai Smart Data Processing is intuitive and easy to use, requiring no prior complex training.

After completing the Terabyte Tester Program, you will have [1] all of the information necessary to leverage the benefits Smart Data can bring to your organization, [2] a strong awareness of the ROI you can drive as part of the process, and [3] valuable insight into how you can quickly define rules to apply tags and collections to enable the power of actionable smart data.

To transition from the Terabyte Tester Program to a production deployment, consider the scale and timeline, particularly the amount of data you wish to convert to Smart Data and the desired start date.

Terabyte Tester Stage | Duration | Activities
Phase 1: Define, Connect, Establish | 1-2 days | Data set defined, secure connections established, baseline set
Phase 2: Scan, Identify, Analyze | 2-4 days | Metadata, Deep, and Ai analyses; results, Tags, and Collections
Phase 3: Report, Review | 1-3 days | Data types, owner/custodial, ROT, analysis, and review
Phase 4: Conclusions, Closure | 1-2 days | Wrap-up, clean-up, and completion

Page 5

Sample Daily Workflow | Terabyte Tester Program

Phase 1: Kickoff Meeting (Day 1)
● Review purpose, discuss process & methodology
● Review requirements
● Set goals & success criteria
● Identify data sets, staging locations & connectivity options
● Questions & answers
● Set Requirements Meeting timing

Requirements Meeting (Day 1)
● Review data content sets and finalize
● Review data staging options and location
● Review connectivity options, timeline, and duration
● Establish a secure web login to the solution
● Review basic configuration parameters
● Questions & answers
● Set Readiness Check timing

Readiness Check (Day 2)
● Verify data set staging, location, and access rights
● Verify connection to the data set for the Terabyte Tester
● Set Terabyte Tester Commencement date
● Confirm all goals & success criteria for the Terabyte Tester

Phase 2: Commencement (Days 3-4)
● Perform initial metadata Surface Scan(s)
● Review initial metadata scan results
● Initiate Deep Scan of content
● Perform intended Identification Scans
● Apply Tags, Classify
● Create and populate Collections

Phase 3: Report and Review (Day 5)
● Review data insights from Scans, identify ROT
● Review ROT results for possible cost savings
● Review Tags and Collections
● Review reports and results
● Discuss the potential for actions from scans, analysis, classification, and ROT

Phase 4: Conclusion and Closing (Day 5)
● Final review and recommendations
● Potential follow-up actions and next steps
● Clean up of the Terabyte Tester SaaS
● Any desired data set cleanup from the reference repository
● Final cleanup and removal of connections

Page 6

Phase 1 | Data Selection and Configuration

Phase 1 of the Terabyte Tester Program focuses on selecting and staging your data for scanning, and on securing connections between the Zantaz Ai Smart Data Processing SaaS environment and your environment.

Data Selection and Staging

You will work with Zantaz Ai Smart Data Processing to scan and analyze up to one terabyte of your data. You can select any combination of unstructured or semi-structured data types, including PSTs, PDFs, config files, CSVs, documents, spreadsheets, log files, and more.

● You select up to 1 TB of total data to be analyzed. Data should not be encrypted or password-protected, to facilitate thorough analysis. The sample data should be unstructured and representative of the data generated by your corporate users from typical business applications and daily usage. If you have questions about data selection, please consult with your PremCloud Hosting Technical Contact.

● Copy the files to a location that can be accessed remotely (as described in the next section) and is isolated from mainstream end-user access and usage. Selecting data from diverse data silos can sometimes provide better insight into the types of data that users are creating, storing, and accessing, thereby providing a more comprehensive understanding for future planning.

● The files should be accessible via a UNC path for scanning. To more closely mimic the broader storage landscape within your organization, a separate share should be created for each copied data set.

IMPORTANT – For the best experience when copying your files into your staging location, use a method that preserves file attributes, such as the created and last modified dates, creator/owner SIDs, and other relevant information. This will enable you to thoroughly explore the features and functions of the Zantaz Ai Smart Data Processing tools, which can classify information based on age, ownership, and other metadata. While many storage administrators may have their preferences based on past experience, some of the more common options are Robocopy, RichCopy, Rsync, and Emcopy. If you are uncertain about how to proceed, your assigned PremCloud Hosting Terabyte Tester delivery expert will be happy to help.

On the next page is a brief questionnaire that you can review with your business or data ownership team to help prepare and correctly stage data for use in the Terabyte Tester Program. This information can be filled out and provided ahead of time, or reviewed in detail during the initial sessions with your PremCloud Hosting technician.
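As an illustration of what "attribute-preserving" means in practice, the Python sketch below copies a tree with shutil.copy2, which retains timestamps and mode bits (though not Windows owner SIDs, so dedicated tools such as Robocopy or Emcopy remain the better choice for full-fidelity staging). The function name and paths are hypothetical:

```python
import os
import shutil

def stage_tree(src: str, dst: str) -> int:
    """Copy a directory tree into a staging location, preserving timestamps.

    shutil.copy2 retains modification/access times and mode bits, which
    age-based classification depends on. Owner SIDs are NOT preserved;
    use a tool like Robocopy or Emcopy when ownership must survive the copy.
    """
    copied = 0
    for root, _dirs, files in os.walk(src):
        target = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target, exist_ok=True)
        for name in files:
            shutil.copy2(os.path.join(root, name), os.path.join(target, name))
            copied += 1
    return copied

# Hypothetical usage:
# stage_tree(r"\\fileserver\dept-share", r"\\staging\tester-share-01")
```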

Page 7

Terabyte Tester Data Staging Worksheet

Client Details
Company:
Contact Person:
Contact Email:
Contact Phone:

Engagement Details
Start Date:
Estimated Duration: 1 Business Week
Target Data Size: 1 Terabyte

Data Staging Checklist
Is the Staged Data a representative sample of unstructured Corporate Data? Y: ☐ N: ☐
Does the Staged Data include a mixture of both active and inactive data? Y: ☐ N: ☐
Does the Staged Data consist largely of user-generated, flat files? Y: ☐ N: ☐
Was the Staged Data copied from multiple sources (1-4 locations)? Y: ☐ N: ☐
Does the Staged Data span a variety of file types and sizes? Y: ☐ N: ☐
Is the Staged Data expected to have low sensitivity? Y: ☐ N: ☐

Data Access Requirements
Is the Staged Data accessible via a UNC (Windows Share) path? Y: ☐ N: ☐
Is the Staged Data broken into multiple (per-source) shares? Y: ☐ N: ☐
Is the account granted "Read" permissions (Share & NTFS) for Zantaz Ai Smart Data Processing? Y: ☐ N: ☐
Is the account granted "Full" (Read & Write) permissions (Share & NTFS) for Storage Optimization Processing? Y: ☐ N: ☐
Is the account granted "Full" (Read & Write) permissions (Share & NTFS) for ROT? Y: ☐ N: ☐

Ai Smart Data Processing Targets

File Share Details (Read-Only)
Staged File Server IP Address:
Share Name:
Service Account:

Page 8

Remote Access Configuration

For the Terabyte Tester Program, Zantaz Ai Smart Data Processing is made available by PremCloud Hosting as a secure SaaS solution. Your data set will remain in your environment. For the duration of the Terabyte Tester Program, a secure and encrypted communication tunnel must be established to scan the data.

● PremCloud Hosting staff will work with you to establish the connection using the mechanism of your choice (e.g., VPN, IPSec tunnel).
● Your files will neither be stored in the Zantaz Ai Smart Data Processing SaaS solution nor anywhere in PremCloud's data center. The secure connection will be used to:
  o Scan files in place.
  o Collect metadata and attribute information.
  o Develop an index of the files for analysis in the Zantaz Data Optimizer web interface.
● Traffic flowing between Zantaz Ai Smart Data Processing and your environment is encrypted in transit, and index and metadata are encrypted at rest.
● All index information, configuration, and metadata are securely deleted from the Zantaz Ai Smart Data Processing SaaS at the end of your Terabyte Tester Program period. If you choose to migrate the results of the Terabyte Tester to a production deployment, appropriate backup and restoration steps will be taken to bring that information into the production deployment.

File Share Details (Full Access: Read & Write)
Staged File Server IP Address:
Share Name:
Service Account:

File Share Data Details
Share Name:
Data Size (GB):
File Count (000's):

Page 9

Ports and Protocols

To experience the Zantaz Ai Smart Data Processing SaaS solution and to leverage its ability to process and analyze your data, the following ports and access are required:

● Port 443 - HTTPS secure web interface access to Zantaz Data Optimizer (HTTP access [port 80] is not allowed). Modern browsers (Chrome, Edge, Firefox) are supported.
● Ports 139/445 - Access to a Windows-based UNC share of your data.

Remote Access Account

To ensure site-to-site connections are authenticated to a specific service account, you must supply the credentials for the service account that can connect to and scan the location where the files are stored.

● This service account must be able to connect to the data storage location via your VPN or preferred mechanism.
● This service account should only have permission to the file share where the test data is located.
● Multiple service accounts may be configured to support different access levels to staged data shares.
● Please provide the account name and password at the point of connection when the test period begins.

Zantaz Ai Smart Data Processing Web Interface

The Zantaz Ai Smart Data Processing web UI will serve as the configuration and administration interface for the duration of the Terabyte Tester. Local authentication to the web UI provides the universal compatibility required while avoiding the often burdensome approval process of integrating with corporate directory services, which would not be worthwhile for a weeklong engagement. Access to the web UI will be provided during the setup and connection configuration period.

o A URL will be provided for the Zantaz Ai Smart Data Processing web-based UI.
o Login credentials will be provided at the time of the connection.
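The port requirements above can be sanity-checked from the staging network before kickoff. The sketch below is a generic TCP reachability probe written for illustration, not a PremCloud tool; the hostnames in the comments are placeholders:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- substitute your actual tenant URL and file server:
# port_open("tenant.premcloud.example", 443)     # HTTPS web UI
# port_open("fileserver.internal.example", 445)  # SMB/UNC share
```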
On the next page is a brief questionnaire that you can review with your Networking & Information Security team to help prepare the necessary connectivity and secure remote access for scanning and analyzing the data you selected for the Terabyte Tester Program. This information can be completed in advance and provided via the PremCloud secure file transfer solution, or it can be reviewed in detail during the initial sessions with your PremCloud Hosting technician.

Page 10

Terabyte Tester VPN Gateway Information

Client Details
Company:
Contact Person:
Contact Email:
Contact Phone:

Client VPN Gateway Details
VPN Gateway Manufacturer (Palo Alto, Fortinet, Cisco, etc.):
VPN Gateway IP Address:
Commission Date:
Decommission Date:

PremCloud Hosting Gateway Details
VPN Gateway Manufacturer:
VPN Gateway IP Address:

IKE (Phase 1) Parameters
IKE Version (e.g., V1, V2): IKEv2
Key Exchange Encryption Algorithm (e.g., 3DES, AES-256): AES-256
Authentication / Data Integrity Hash Algorithm (SHA1, MD5, SHA256, SHA384, SHA512): SHA384
Diffie-Hellman Group for IKE SA (1, 2, 5, etc.): 20
Authentication Method (Pre-Shared Key, Certificates, RSA): PSK
Lifetime of IKE SA (in seconds, for example, 86,400): 86400

IPSEC (Phase 2) Parameters
ESP Transform Encryption Algorithm (e.g., 3DES, AES-256): AES-256
Authentication / Data Integrity Hash Algorithm (SHA1, MD5, SHA256, SHA384, SHA512): SHA384
Diffie-Hellman Group for Perfect Forward Secrecy (if PFS is used): 20
Lifetime of IPSEC SA (in seconds, for example, 28,800): 28800

Protected Networks (Encryption Domains)

PremCloud Hosting Networks
IP Hosts | IPv4 Address (Remote) | Ports Required

Client Network
IP Hosts | IPv4 Address (Remote) | Ports Required
Terabyte Tester Staged File Server #1
Terabyte Tester Staged File Server #2
Terabyte Tester Directory Services Server #1
Terabyte Tester Directory Services Server #2

Page 11

Phase 2 | Scanning and Analysis

Phase 2 is where the magic happens. Working with your PremCloud Hosting resource, you will leverage various Zantaz Ai Smart Data Processing functions to gain tremendous insight into your data and enrich it. These will be performed in three (3) main steps:

1. Scanning - Surface, Deep, Analysis, and Tagging
2. Reporting Review - Topical and content reports, classification, review
3. Actions Roadmap - Potential actions to take, ROT removal, retention, disposition

Scan and Analysis

Typically, three levels of scanning will be performed. Two levels are specifically for data identification and processing. A third level entails scanning for specific conditions and applying tags and classification as required.

● Surface Scan
A Surface Scan ("metadata") examines files to collect basic information, including file names, types, sizes, owner/custodian, location, and date and time stamps.

● Deep Scan
Following a surface scan, a Deep Scan performs a comprehensive content scrape and indexing to develop and enable full content search capabilities for the scanned data and files.

● Analysis / Classification
Analysis scans apply tags to classify attributes, such as redundant, obsolete, or trivial (ROT) data, and interrogate for PII, PCI, HIPAA, and other sensitive regulatory data. Tags applied based on the analysis enable further review, classification, or decisions about remediation for cost savings, and assignment to Collections for potential additional actions.
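Conceptually, a surface (metadata) scan gathers file-system properties without reading file contents. The Python sketch below shows the idea; it is a simplification for illustration only, not the product's implementation (owner/custodian resolution, for instance, is omitted):

```python
import os

def surface_scan(root):
    """Collect basic metadata (name, type, size, timestamps) for each file."""
    records = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records.append({
                "name": name,
                "path": path,
                "type": os.path.splitext(name)[1].lower() or "(none)",
                "size": st.st_size,
                "modified": st.st_mtime,
                "created": st.st_ctime,  # platform-dependent meaning
            })
    return records
```

A deep scan would go further and read the contents of each file to build a searchable index, which is why it takes longer and is run as a separate step.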

Page 12

Surface Scan

The following quick-reference workflow will enable a rapid path to a successful Surface Scan. This assumes you are logged into the Zantaz Ai Smart Data Processing UI. Additional screen views and panels are available in the solution documentation.

Performing a Surface Scan

1. Click the "Hamburger" icon on the left of the Zantaz Ai Smart Data Processing UI to open the menu. Select "Scans," then click "Create a new scan."

2. Configure the Scan by completing the step-by-step wizard.
a. Select the Connector that will perform the scan.
b. Select the Default Credentials option to scan using the account configured for the selected Connector, or specify an alternative account that has read access to the target location (the Connector will test the credentials as you enter them).

Page 13

c. Provide the path for the location to be scanned (a UNC path works best; the Connector will verify the path as you enter it).
d. Optionally, check the box to enable a Hash scan, which helps identify potential duplicate items, to be performed automatically when the Surface Scan completes (if not selected, the Hash scan will be performed separately, for example, as part of a later ROT analysis).
e. Provide some information about your cost basis for the storage being scanned to enable real-time calculations of potential savings.

Page 14

f. Finally, provide a unique name and description for the Scan for later reference, and click the Create Scan button.

3. Use the Scan screen to review, delete, start, or stop your configured scans. Scan progress is updated in real time, and an indicator appears when the scan is complete.

4. When the surface scan is complete, select the triple dots to the right of the selected scan name and select "Solo in Unified Data Optic" to review the initial scan results.
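The potential-savings figure driven by the cost-basis input in the wizard reduces to simple arithmetic: reclaimable bytes times the cost per unit of storage. A sketch, with an assumed (purely illustrative) storage rate:

```python
def projected_savings(reclaimable_bytes: int, cost_per_gb_month: float) -> float:
    """Monthly storage cost avoided by removing the given number of bytes."""
    return reclaimable_bytes / (1024 ** 3) * cost_per_gb_month

# Example: 250 GB of removable duplicates at an assumed $0.10/GB-month
# yields roughly $25/month in avoided storage cost.
```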

Page 15

5. From the Data Optic view, you can explore data by timeline, file listing, identified duplicates, and various data sorting and filtering panels, and initiate additional scan types to gain an even deeper understanding of and control over the data.

Reviewing the Scan (Explorer Menu)

1. Data Filtering
Select additional scan(s) to add to the review and/or apply filters to target specific data for analysis. Filter based on any combination of metadata, content (as available), applied tags, and more. Filters can be saved for later re-use.

2. Overview
Provides graphical views of data by age, file type, and certain tag types (when applied).

Page 16

3. Files
Browse the list of files processed during the scan or matching the specified filter. You can view metadata about the files, preview the content (if available), apply tags, and add files to collections.

4. Duplicates
Lists duplicate files found, based on hash algorithms that can be computed with the Surface Scan or during an analysis such as ROT classification.

5. Ownership
Provides a breakdown of the users and groups that have permission to the scanned files. This option requires the ingestion of an LDAP Data Interchange Format (LDIF) file, which your PremCloud Hosting associate can help you create.

Enrichment and Classification (Analyze Menu)

1. Deep Content
Perform Deep Content scanning to create a comprehensive, searchable index of textual content within files.

2. Images
Employ AI computer vision models to perform optical character recognition (OCR) to extract text from images, and to identify and describe image contents.

3. Classification
Identify and tag redundant, obsolete, or trivial files based on user-specified rules, and detect and tag files containing sensitive data, such as PII, PCI, HIPAA, or NSFW content.

Enrichment and Classification

The next step after the Surface Scan, the Deep Scan extracts and indexes file contents to enable duplicate detection and search, while classification analyses begin transforming unstructured data into actionable, smart data. A brief review of the Enrichment and Classification menus will help you start the next level of analysis. Full views, panels, and descriptions are available in the solution documentation.

Performing a Deep Scan

Page 17

1. When viewing data from a Surface Scan, select the "Analyse" menu and click "Deep Content."

2. Provide a name and description for the Deep Content analysis, then click "Start Analysis."

3. The Deep Scan analysis will run in the background, and its real-time status will always be available on the Data Intelligence screen (accessed using the hamburger menu at top left).
a. Re-entering credentials is not needed here; Deep Scans, ROT Classification, and other analyses will use the encrypted credentials provided for the associated Surface Scans.

Performing ROT Classification

1. From the same "Analyse" menu, select the "ROT Classification" option to launch a scan for Redundant, Obsolete, and Trivial files within the data set.

Page 18

2. Select the conditions for the ROT Classification analysis based on your rules for deduplication, date ranges, and so on.
a. Tag the "Golden Copy" of duplicate files by date type (created, modified, or last accessed) and order (newest or oldest).
b. Choose the date cutoff and date type to identify the obsolescence criteria.
c. Specify rules for trivial-data declaration by file type designation.
d. Apply an "Analysis Name" and an "Analysis Description," then click "Start Analysis" to initiate.
e. If not already completed, a "Hash" scan will automatically be performed to identify duplicate files before processing the ROT classification.
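The rule set above (a golden copy chosen by date order, an obsolescence cutoff, trivial file types) can be illustrated with a small Python sketch. This is a conceptual model only; the extension list, tag names, and function are assumptions for illustration, not the product's implementation:

```python
import hashlib
import os

TRIVIAL_EXTS = {".tmp", ".log", ".bak"}  # example "trivial" designations

def file_hash(path, algo="sha256", chunk=1 << 20):
    """Content hash used to detect byte-identical duplicates."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def classify_rot(paths, obsolete_before):
    """Tag each path Redundant / Obsolete / Trivial under simplified rules.

    - Redundant: same content hash as another file; the oldest copy
      (by modification time) is tagged as the Golden Copy instead.
    - Obsolete: modified before the cutoff timestamp (epoch seconds).
    - Trivial: extension on the trivial list.
    """
    tags = {p: set() for p in paths}
    by_hash = {}
    for p in paths:
        by_hash.setdefault(file_hash(p), []).append(p)
    for dupes in by_hash.values():
        if len(dupes) > 1:
            dupes.sort(key=os.path.getmtime)  # oldest first
            tags[dupes[0]].add("Golden Copy")
            for p in dupes[1:]:
                tags[p].add("Redundant")
    for p in paths:
        if os.path.getmtime(p) < obsolete_before:
            tags[p].add("Obsolete")
        if os.path.splitext(p)[1].lower() in TRIVIAL_EXTS:
            tags[p].add("Trivial")
    return tags
```

Note that a file can carry several tags at once (for example, both Redundant and Obsolete), which is why the review steps that follow filter by tag rather than by a single category.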

Page 19

3. To review the results of the "Enrichment" and "Classification" scans, return to the "Explore" section of the Data Optic when the scan is complete. For example, click the "Duplicates" option of the "Explore" menu to see that duplicate files have been detected and the "golden copy" has been tagged by the ROT Classification analysis.

4. Tags such as "Golden Copy" or "Redundant" applied by the classification scans are visible when viewing the "Files" section of the "Explore" menu in the Data Optic.

Page 20

Performing Sensitive Data Classification

1. From the same "Analyse" menu, select the "Sensitive Data" classification scan option.

2. Review and select the appropriate regulations from the available list (the most commonly used regulations are at the top) to determine the rules to apply for tagging data based on its sensitivity.

3. Expand the "Customise" section to fine-tune your selections from among dozens of types of sensitive information to seek.

Page 21

4. Apply an "Analysis Name" and "Analysis Description," then click "Start Analysis" to process the scan.

5. Results will be available in the "Explore" section of the Data Optic, where you can view and click through various graphical representations of the data, including applied tags.

Page 22

6. From the "Explore" menu, select the "Files" option to view file-level information about completed scan types and applied classification tags, such as Redundant, Obsolete, Trivial, NSFW, and more.

Reviewing, Creating, and Applying Tags

1. From the "Hamburger" icon on the left, select "Tags."

2. Tags are categorized into two types: System (pre-defined) and Custom (created by users). The list includes the count of files to which each tag has been applied, either manually by the user or as a result of classification scans, as described earlier.

3. Click the desired Tag(s) and then select "Add to current filters" to filter the list in the Unified Data Optic (for example, on the "Files" or "Duplicates" screens) based on the Tags.

Page 23

4. System Tags are applied by analysis jobs (e.g., ROT, Sensitive Data Classification). Any Tag can also be applied manually to a single file, a selected group of files, or all files shown in the current Unified Data Optic set.
a. For example, to apply a Tag to a single file, select it in the "Files" list in the Unified Data Optic and then select the "Tags" option in the sidebar. Type the first few letters of the desired tag to see a list of tags containing those letters, then click the desired Tag and click "Add Tag." The same method can be used to add the document to a Collection.

5. To apply tag(s) to multiple files, use the process described below for Collections, but select "Tags" instead.

Creating a File Collection

A "Collection" is a sorted and filtered data set built from classification scans or from manual declarations, tags, and assignments. Collections can include files from multiple scans.

1. Click the "Collections" menu option to review or create a collection.

Page 24

2. Collections are listed with the count of files (if any) each includes and the number of scan(s) from which the files originated. Click "Create new collection" to customize your own:
a. Enter a "Name."
b. Enter a "Description."
c. Click "Create a new collection."

3. To add a single file to a Collection, follow the process outlined above for Tags, but select Collections instead.

4. Adding multiple files to a Collection can be done in multiple ways.
a. To add a specific group of selected files to a Collection, check the box(es) next to the desired file(s) in the Unified Data Optic file list (under the Explorer > Files menu), and then click the "Actions" drop-down at the top-right corner of the file list.
b. To add all files from the current data set shown in the Unified Data Optic to a Collection, click the "Data Actions" drop-down at the top-right corner of the Unified Data Optic screen.

5. Select the desired Collection name to add the files. The same process can be used to add Tags to the selected files or to all files in the current data set.

6. When you return to the "Collection" screen, the count of files is updated to show the results. Selecting an individual file in the Explorer > Files screen of the Unified Data Optic will also show the Collection(s) to which it has been added (similar to viewing applied Tags on a file).

Page 25

Smart Data Routing

The Smart Data Organizer provides several options for taking action on files that have been added to a Collection. These include copying or moving files to a new location, or deleting unnecessary files to realize storage savings.

1. Click the "hamburger" icon and select the Smart Data Organizer menu option. Click the Create Workflow link to begin defining a new router workflow activity.

2. Select the workflow action to perform: New Copy Action, New Move Action, or New Delete Action.

3. For any router action, select the Collection of files to be acted upon. Enter a few characters to see a list of Collections whose names include the entered characters.

4. Move and copy workflows require the destination to be specified. This includes selecting the Connector that has access to the destination location, specifying the account (if different from the default account configured for the destination Connector), and the destination file path.

Page 26

5. Specify the number of files to act upon in parallel for best performance (default is 2).

6. Additional options for a Move or Copy workflow include the number of times to retry a failed action (default is 5), whether to retain the same folder structure in the destination as in the source and/or create a new root directory in the destination path, and the character to substitute for any invalid characters in the output file or path name.
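The retry option in step 6 follows a familiar pattern: attempt the action and, on failure, retry up to the configured count before surfacing the error. A generic sketch of that pattern (an illustration, not the product's implementation):

```python
import time

def with_retries(action, retries=5, delay=0.0):
    """Call action(); on exception, retry up to `retries` more times.

    Re-raises the last exception if every attempt fails, mirroring a
    workflow that marks an item failed after exhausting its retries.
    """
    last_exc = None
    for _attempt in range(retries + 1):
        try:
            return action()
        except Exception as exc:  # a real router would narrow this
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```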

Page 27

7. Finally, enter a name and description for the router workflow action. The current status of all workflows is listed on the Smart Data Organizer screen and can be expanded to display complete workflow action details.

Phase 3 | Reporting and Review

Page 28

Zantaz Ai Smart Data Processing will generate a report summarizing the results from the data scanning and analysis performed during the Terabyte Tester engagement. This reporting will also include insights into the data's classification, risk profile, and available storage optimization opportunities.

Phase 4 | Closing the Terabyte Tester Trial Program

Page 29

Upon completion of the Terabyte Tester Program, your original sample data set in the UNC path(s) of your environment is left intact. All metadata, database records, index content, tags, collection identifiers, and any report data will be entirely wiped from the Terabyte Tester tenant, and the tenant will be reset. No remnants of the Tester Program will be left behind. To wipe the environment, the following actions will be taken:

● Removal of all indexing content, all search results, and any search queries generated.
● Deletion and destruction of metadata and database records generated and stored in the Terabyte Tester database structures.
● Purging of all reporting results, collection details, and comparison data.
● Deletion of any tags, analyses, results files, exported source data, and all other data from the duration of the Terabyte Tester Program from the tenant.

Please contact your Account Manager if a more definitive data removal and wiping statement is needed.

Support During the Terabyte Tester Program

During the Terabyte Tester Program period, your primary interaction for support will be with your Account Manager and an assigned PremCloud Hosting Technical Contact.

Support Portal

The PremCloud Support team remains your primary point of contact for all support matters outside the Terabyte Tester Program. This includes support for all production deployments of the Zantaz Ai Smart Data Processing solution.

Page 30

Frequently Asked Questions I Terabyte Tester Program Below are answers to some frequently asked questions about the Terabyte Tester program. General Questions 1. How Long Does the Test Period Last? Once the prerequisites are established, the testing process will generally last about 1-2 business weeks. This provides ample time for us to walk you through meeting the goals established at the outset of the engagement. These will include scanning the staged target data, performing the desired ROT analysis, and applying tags and collections for potential cost savings. We will also show you how to index, classify, and enrich any subset of the data you wish to gain greater insight into before determining the appropriate action. Having successfully met the objectives and explored a couple of fundamental use cases at the end of the engagement, we will fully decommission your instance, destroying any metadata or processed insight in the process. 2. How will I access the Terabyte Tester SaaS? A private URL will grant secure, authenticated access to your single-tenant instance. 3. Do I Need to Create Accounts? During the initial configuration, PremCloud Hosting will help you set up your Zantaz Ai Smart Data Processing instance, establish a secure VPN tunnel, and define the required accounts for scanning and analyzing your data in your designated staging area. 4. What Does ROT Mean? ROT is an acronym that stands for Redundant, Obsolete, and Trivial. This refers to files that, after analysis, are likely to provide cost savings by being deleted or moved to lower-tier storage because they are old or unnecessary. Security Questions 5. Are My Files Moved or Copied? No, your files are not moved or copied. Scanning the files collects only metadata and indexing information for analysis. The files are never stored in the PremCloud Hosting data center. 6. How Does Zantaz Ai Smart Data Processing Access My Data? 
A secure, point-to-point VPN tunnel is required for Zantaz Ai Smart Data Processing to access the files in your staging area.

7. Is Zantaz Ai Smart Data Processing Multi-Tenant?
No. Each Zantaz Ai Smart Data Processing Terabyte Tester Program instance is deployed discretely, with its own set of component pods isolated from other deployments, its own private URL, and unique logins.


8. Who Else Can Access My Index Data?
Only the authorized persons from your organization whom you identify, plus the PremCloud Hosting team members working with you during the test period, will have access to your Zantaz Ai Smart Data Processing environment.

9. Is the Index Data Encrypted in Motion and at Rest?
Yes. Scanning is performed over a secure VPN tunnel, with the data encrypted in transit. The index data used to analyze the scanned files is also encrypted at rest.

Data Staging Questions

10. Do I Need to Supply Exactly a Terabyte?
No. For the Terabyte Tester Program, you can supply up to one terabyte of your data in a secure location for analysis; you may supply as much or as little data as desired up to that limit. Your PremCloud Hosting Technical Contact can assist in identifying ideal data sets you may wish to use.

11. What Kind of Files Can Be Used?
For best results, we recommend a mix of unstructured file types representative of your data as a whole. This might include, but is not limited to, PSTs, PDFs, configuration files, CSVs, spreadsheets, documents, log files, and so on.

12. Do I Need to Move the Data to a Staging Location?
We recommend copying the test data to a staging location in a DMZ or testing tier, where your network and information security teams can quickly secure and silo the data while limiting Zantaz Ai Smart Data Processing's scanning access to just the staged data location(s). This lets you explore all the functionality of Zantaz Ai Smart Data Processing without affecting any other data in your environment.

13. What Is the Difference Between a Surface Scan and a Deep Scan?
A Surface Scan, also called a metadata scan, examines file properties such as file name, type, size, and creation and modification dates. A Deep Scan examines and indexes file contents, including the contents of embedded files (those found inside ZIP or other archive types, attachments, etc.).
A typical workflow uses the results of the surface scan to target specific data for a deep scan.

14. When and How Is the Index Data Removed?
Upon conclusion of your Terabyte Tester Program engagement, the underpinning virtual infrastructure resources (including all associated Kubernetes deployments, pods, services, daemon sets, and stateful sets), up to and including the Kubernetes namespace for that deployment, will be destroyed. No index information, metadata, or other data is retained.
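To make the surface-versus-deep distinction above concrete, here is a minimal, hypothetical sketch of the two passes. The product's actual scanners are far more capable (embedded-file extraction, archive traversal, enrichment); the function names and the flat-directory assumption below are purely illustrative:

```python
from pathlib import Path

def surface_scan(root):
    """Metadata-only pass: records file properties, never reads content."""
    results = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            stat = path.stat()
            results.append({
                "name": path.name,
                "type": path.suffix.lower(),
                "size": stat.st_size,
                "modified": stat.st_mtime,
            })
    return results

def deep_scan(root, targets):
    """Content pass, run only on the files the surface scan flagged.

    Assumes a flat staging layout for simplicity.
    """
    index = {}
    for entry in targets:
        path = Path(root) / entry["name"]
        index[entry["name"]] = path.read_text(errors="replace")
    return index
```

This mirrors the typical workflow described above: run the cheap metadata pass over everything, filter the results (for example, by file type or age), and hand only that subset to the expensive content-indexing pass.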