Data Architecture and Management Designer

When is a query deemed selective?

A query is deemed selective when one of the query filters is on an indexed field and the filter reduces the resulting number of rows below a system-defined threshold. The performance of a SOQL query improves further when two or more filters used in the WHERE clause meet these conditions.
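For illustration, a SOQL sketch assuming a hypothetical custom object Invoice__c whose Invoice_Number__c field is marked as an External ID (and is therefore indexed):

    SELECT Id, Amount__c
    FROM Invoice__c
    WHERE Invoice_Number__c = 'INV-000042'

Provided the filter returns fewer rows than the selectivity threshold, the query optimizer can use the index rather than a full table scan.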

How is a self-relationship created, and what can it be used for?

A self-relationship is created by adding a lookup field on an object that references the same object. It allows records of the same type to be related to one another, such as relating an account to a parent account.

What is a skinny table?

A skinny table is a custom table on the Force.com platform that contains a subset of fields from a standard or custom base Salesforce object. By having narrower rows and less data to scan than the base Salesforce object, skinny tables allow Force.com to return more rows per database fetch, increasing throughput when reading from a large object. Skinny tables are most useful with tables containing millions of records, and can enhance performance for reports, list views, and SOQL.

When is a tier-based data archiving and retention strategy the best course of action?

A tier-based data archiving and retention strategy is the best course of action for structuring and placing data in a hierarchy based on its business value and the duration for which it should be stored for various business operations. Data can then be archived based on its position in the hierarchy and the desired schedule.

Why should you have an appropriate data archiving strategy?

An appropriate data archiving strategy can be utilized to ensure performance and scalability. Archiving data at the same rate it comes into the system can prevent a negative impact on performance.

What are the four APIs that Salesforce provides in order to backup data and metadata?

REST API, SOAP API, Bulk API, and Metadata API

What does the Field Audit Trail allow you to do?

Field Audit Trail allows you to define a policy to retain archived field history data up to 10 years from the time the data was archived and for up to 60 fields per object. This feature can help you to comply with industry regulations.

What can Field Audit Trail be used for?

Field Audit Trail can be used to automatically store old and new values of fields for a policy defined for up to 10 years. You can track up to 60 fields per object.

What is the use of Field History Tracking?

Field History Tracking can be used to track and store field values in Salesforce. It allows for accessing the field history data up to 18 months via the UI and 24 months via the API. You can turn on Field History Tracking for a max of 20 fields on an object.

What can be enabled in order to allow creating reports based on field-level changes?

Field history tracking can be enabled in order to allow creating reports based on field-level changes. Dashboards can be created based on field history reports.

How is file and data storage calculated?

File and data storage are calculated asynchronously, so if you import or add a large number of records or files, the change in your org's storage isn't reflected immediately.

What should be reviewed with business users while defining a system of record when there are multiple enterprise systems and data integrations?

Data flows should be reviewed with business users.

What does data governance do?

Data governance establishes rules and policies to ensure reliable and effective customer data. These rules define processes and protocols to ensure the usability, quality, and policy compliance of organizational data.

What is serial load?

Serial load refers to loading data serially so that insert operations are performed one batch at a time. In serial mode, the degree of parallelism is 1, which is the baseline for the slowest performance. Serial load should be preferred for operations prone to record locking, such as group membership operations like inserting users or group members.

What fields are indexed by default?

* Primary keys (Id, Name, and Owner fields)
* Foreign keys (lookup or master-detail relationship fields)
* Audit dates (such as SystemModStamp)
* Custom fields marked as External ID or Unique

What are the four general master data domains?

1. Customers - within the customer domain, there are customer, employee, and salesperson sub-domains.
2. Products - within the products domain, there are product, part, store, and asset sub-domains.
3. Locations - within the locations domain, there are office location and geographic division sub-domains.
4. Other - within the other domain, there are sub-domains such as contract, warranty, and license.

What are the steps in developing a data governance framework?

1. Data Assessment - assessing the current state of the data.
2. Developing a Governance Framework - includes data definitions, quality standards, roles and ownership, security and permissions, and quality control.
3. Implementation - includes standardization of fields, deduplication of records, data enrichment efforts, data cleansing, and monitoring the quality of data on an ongoing basis.

What are the data governance roles and responsibilities?

1. Executive Team - consists of upper management and other senior members. Responsibilities include administration of the program and communication to other teams.
2. Strategic Team - comprised of VPs, senior directors, and the data architect. Responsibilities include making decisions and providing approval at the enterprise level.
3. Tactical Team - comprised of directors and data domain stewards. Responsibilities include acting as the domain authority for managing data quality.
4. Operational Team - comprised of managers, data stewards, and individual contributors. Responsibilities include defining the data and ensuring its integrity and quality.

What conditions will make a SOQL query automatically non-selective?

1. Negative filter operators (such as !=)
2. Comparison operators paired with text fields
3. Leading wildcards ('%')
4. References to non-deterministic formula fields, such as cross-object formula fields
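Two illustrative sketches of the first and third conditions (Status__c is a hypothetical custom field):

Negative filter operator (non-selective):

    SELECT Id FROM Account WHERE Status__c != 'Closed'

Leading wildcard (non-selective):

    SELECT Id FROM Account WHERE Name LIKE '%corp'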

What are the different types of data?

1. Unstructured Data - data found in emails, white papers, magazine articles, etc.
2. Transactional Data - data about business events that have historical significance or are needed for analysis by other master data. Unlike master data, transactional data is inherently temporal and instantaneous in nature.
3. Metadata - data about other data. It may reside in a formal repository or in various other forms such as XML documents, report definitions, columns in a database table, etc.
4. Hierarchical Data - data that stores the relationships between other data. It may be stored as part of an accounting system or store actual corporate hierarchies.
5. Reference Data - a special type of master data used to categorize other data or to relate data to information beyond the bounds of the enterprise.
6. Master Data - the core data within the enterprise that describes the objects around which business is conducted. It typically changes infrequently and can include reference data that is necessary to operate the business.

What types of things are supported on big objects?

A big object supports object and field permissions; however, features such as triggers, flows, and processes are not supported.

What is a business dictionary and why is it needed?

A business dictionary is a repository of metadata that contains information about various aspects of the data model including fields and relationships among fields. You can utilize an Entity Relationship Diagram (ERD) and the Metadata API to build the data dictionary for the customization information in Salesforce. A business dictionary is needed to document how the business or data entities should be defined and related to one another in a particular system.

What does a custom report type support?

A custom report type supports up to four objects and 60 field references. It is possible to create a report on an external object in Salesforce.

What does a data governance framework include?

A data governance framework includes elements such as data definitions, quality standards, roles and ownership, security and permissions, and quality control.

What can be used to enrich and cleanse data on an ongoing basis?

A feature like Lightning Data or Data.com Clean can be utilized to enrich and cleanse data on an ongoing basis. Data.com Clean allows comparing individual records side by side with accurate data from Data.com.

What is a hierarchy custom setting used for?

A hierarchy custom setting uses built-in hierarchical logic that lets you personalize settings for specific profiles or users. The hierarchy logic checks the organization, profile, and user settings for the current user and returns the most specific (lowest) value.
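A minimal Apex sketch, assuming a hypothetical hierarchy custom setting Feature_Config__c with a checkbox field Enable_Beta__c:

    // getInstance() returns the most specific value for the running user:
    // user-level, then profile-level, then the org default
    Feature_Config__c cfg = Feature_Config__c.getInstance();
    Boolean betaEnabled = cfg.Enable_Beta__c;

    // The effective settings for a specific user can also be requested
    Feature_Config__c userCfg = Feature_Config__c.getInstance(UserInfo.getUserId());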

What is a partial backup, and what are pros and cons?

A partial backup is a backup of a subset of data. Its pros are that it is efficient for retrieving a record from a subset of data, and is an ideal approach for archiving purposes. The cons are that it may lack related information.

What should be designed and submitted to Salesforce to conduct a performance test of an application?

A performance test plan should be designed and submitted to Salesforce to conduct a performance test of an application.

What do all entries in the Field History Tracking log include?

All entries in the Field History Tracking log include the date, time, nature of the change, and who made the change. Not all field types are available for historical trend reporting. You can track up to 20 fields per object, and the history can be accessed for up to 24 months via the API.

What is an entity relationship diagram?

An entity relationship diagram helps in visualizing the objects, fields, and relationships in a data model.

What can the Heroku Postgres database be used for?

An external Heroku Postgres database can be used to store information that does not reside in Salesforce but occasionally needs to be displayed in Salesforce. Heroku Connect and Heroku External Objects can be used to expose data stored in the Postgres database in Salesforce. The data can be accessed from a Heroku app and also synced with and displayed in Salesforce.

What is an incremental backup, and what are the pros and cons?

An incremental backup backs up differences since the last full backup. The data replication API is well suited for this type of backup. The pros are that it is efficient for retrieving a change that took place on a specific date, while the cons are that it may lack related information and take longer to rebuild a complete picture.
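As a sketch of the idea (complementing the replication API's getUpdated()/getDeleted() calls), an incremental extract can filter on the indexed SystemModstamp field; lastBackupTime is a hypothetical variable holding the previous run's timestamp:

    // Hypothetical timestamp of the previous backup run
    Datetime lastBackupTime = Datetime.now().addDays(-1);

    // Accounts created or modified since the last backup run
    List<Account> changed = [
        SELECT Id, Name, SystemModstamp
        FROM Account
        WHERE SystemModstamp > :lastBackupTime
    ];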

What is vertical optimization when performing a backup?

Backup time is, among other parameters, proportional to the number of records you are retrieving. Vertical optimization involves splitting your query into several queries to reduce the number of records included per query. This can help avoid incidents such as timeouts and other limits.

What is horizontal optimization when performing a backup?

Backup time is, among other parameters, proportional to the number of fields you are querying. Splitting the query horizontally into several queries to reduce the number of queried fields per query can help avoid incidents such as timeouts and other issues.

What is a Big Object?

Big Objects are a Salesforce feature for storing billions of records in a read-only format. You can archive data from other objects or bring massive datasets from outside systems into a big object to get a full view of your customers. Clients and external systems use a standard set of APIs to access big object data.

What is Central Master Data Management?

Central Master Data Management is when you create records within the MDM and then distribute (syndicate) the centrally created data to remote systems.

What is consolidation?

Consolidation is used to identify duplicate records and merge them into one record.

What fields can you set by enabling the "Create Audit Fields" feature?

* CreatedById
* CreatedDate
* LastModifiedById
* LastModifiedDate

What are custom metadata types?

Custom metadata types are customizable, deployable, packageable, and upgradeable application metadata. First, you create a custom metadata type, which defines the form of the application metadata. Then you build reusable functionality that determines the behavior based on the metadata of that type. You can use SOQL to access your custom metadata types and to retrieve the API names of the records of those types.
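For example, a SOQL sketch against a hypothetical custom metadata type Discount_Setting__mdt with a custom field Rate__c:

    // Custom metadata type records are queried like regular object records
    List<Discount_Setting__mdt> settings = [
        SELECT DeveloperName, MasterLabel, Rate__c
        FROM Discount_Setting__mdt
    ];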

What does data classification consist of?

Data classification can consist of multiple steps such as discovery, identification, categorization, and definition of security controls. Metadata can then be separated out and classified into different groups.

What artifacts should be included in the documentation related to the integration between two systems?

Data Models, Integration Specifications, and Data Lineage.

What is Data Taxonomy and why is it needed?

Data Taxonomy is classifying the data into categories and sub-categories while defining common terms and semantics for multiple enterprise systems. A hierarchy that is based on segregation into multiple categories can be created for a certain type of metadata, like a custom object. Data taxonomy is needed to add consistency to a particular type of data when it is used by multiple enterprise systems in an organization.

What are the two types of information that you can backup from Salesforce?

Data and metadata

What does data lineage include?

Data lineage includes aspects such as the origin of data, the operations that affect the data, and where the data can possibly move during the lifecycle.

What are key attributes of data quality?

Data quality has several key attributes such as age, completeness, accuracy, consistency, duplication, and usage.

Specifically, what are data stewardship practices?

Data stewardship practices outline the structure of a quality control process, including frequency, scope, owners, and checks. They also include methods for cleansing, de-duplicating, blocking duplicates, and merging and adding to records, as well as a policy for data retention and archival, supported by metrics that can be tracked using a simple dashboard.

What is data stewardship?

Data stewardship puts tactical roles and activities into effect to ensure adherence and support of the data governance plan. It includes assigning people to uphold the plan, and developing a strategy for monitoring and maintenance of customer data.

What is Data Stewardship?

Data stewardship refers to utilizing tactical roles and activities that support the data governance plan for monitoring and maintaining data on a daily basis.

What does defining data lineage involve?

Defining data lineage involves specifying the origin of data, how it is affected, and where it moves in the lifecycle.

What are the two matching techniques that today's MDM apps use to detect duplicates?

Deterministic Matching and Probabilistic Matching

What is deterministic matching?

Deterministic Matching mainly looks for an exact match between two pieces of data. As many systems do not have a common unique identifier, the majority of MDM implementations use several data elements, such as name, address, and date of birth, deterministically match the values of each element separately, and then combine the individual results into an overall match score.

What can using divisions do?

Divisions can segment an organization's data into logical sections, making searches, reports, and list views more meaningful to users.

What can Einstein Analytics dashboards be used for?

Einstein Analytics dashboards are interactive views of data that support creating ad hoc filters to answer questions. Einstein Analytics can be used to explore both Salesforce data and external data using interactive views, which can be accessed and drilled into from the Salesforce UI and mobile devices. In addition, it can be used to visualize complex KPIs, enable self-service analytics, and show year-over-year historical trends.

What are Einstein Analytics' capabilities when it comes to large datasets?

Einstein Analytics has a large dataset capability, which supports hundreds of millions of rows, as well as quick querying and response times.

What can be used to build a data dictionary?

Entity Relationship Diagrams (ERDs) and the Metadata API can be used to build a data dictionary.

What does Event Monitoring provide tracking of?

Event Monitoring provides tracking for these types of events:
* Logins
* Logouts
* URI (web clicks in Salesforce Classic)
* Lightning (web clicks, performance, and errors in Lightning Experience and the Salesforce Mobile App)
* Visualforce page loads
* API calls
* Apex executions
* Report exports
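Event log data is exposed through the EventLogFile object and can be retrieved with SOQL; a sketch:

    SELECT Id, EventType, LogDate, LogFileLength
    FROM EventLogFile
    WHERE EventType = 'Login' AND LogDate = YESTERDAY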

How can you assess your data for usage?

Review the available tools and resources your business uses. Are you optimizing your data use?

What are false positives?

False Positives are matching cases where two records are linked because they were found to match, when in fact, they represent two different entities.

What are false negatives?

False negatives are matching cases where two records are not linked because they were not found to match, when in fact they represent the same entity.

What represents the most trusted data source in an MDM solution?

The Golden Record, also known as the Single Source of Truth (SSOT).

What is Harmonization?

Harmonization is the process of pushing the new cleansed data back out to your partner systems.

In order to improve report performance, what should one do?

In order to improve report performance, one should:
* Use efficient filters
* Remove unnecessary columns from the report
* Hide detail rows
* Write efficient formulas
* Bucket data sparingly
* Simplify sharing rules/permissions
* Hard delete records
* Run slow reports during off-peak hours

What are important questions when considering the creation of a data archiving/purging plan?

Important questions when considering the creation of a data archiving plan are:
1. Is it necessary to create reports on the custom objects? If so, can the reports contain summarized information rather than actual records?
2. Will the archived data be needed by users for reference or reporting? If so, what fields are required?
3. Will the purged data be needed by any users for business operations?
4. How do regulatory and legal policies influence data archiving requirements?
5. What frequency will be necessary when archiving and purging?
6. How will the frequency affect business operations?

How can you decrease report runs to increase performance?

In order to decrease report runs, a dashboard can be used. A dashboard refreshes data for everyone who has access to it whenever anyone clicks refresh. Using a dashboard will require fewer report runs.

What can you do if you need to delete a large number of records and aggregate them into a custom object automatically?

In order to delete a large number of records and aggregate them into a custom object automatically, schedulable batch Apex can be utilized. A job can be scheduled to run each month to identify records, aggregate totals into a custom object and then delete them.
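A minimal sketch of such a job, assuming hypothetical objects Log_Entry__c (the records to aggregate and delete) and Log_Summary__c (the aggregation target); this is an illustration, not a definitive implementation:

    global class LogArchiveBatch implements Database.Batchable<SObject>, Schedulable {

        global Database.QueryLocator start(Database.BatchableContext bc) {
            // Identify records older than 12 months for aggregation and deletion
            return Database.getQueryLocator(
                'SELECT Id, Amount__c FROM Log_Entry__c WHERE CreatedDate < LAST_N_MONTHS:12');
        }

        global void execute(Database.BatchableContext bc, List<SObject> scope) {
            Decimal total = 0;
            for (SObject s : scope) {
                Log_Entry__c entry = (Log_Entry__c) s;
                total += (entry.Amount__c == null) ? 0 : entry.Amount__c;
            }
            // Aggregate the batch total into a custom object, then delete the records
            insert new Log_Summary__c(Total_Amount__c = total, Summary_Date__c = Date.today());
            delete scope;
        }

        global void finish(Database.BatchableContext bc) {}

        // Schedulable entry point so the job can run on a schedule
        global void execute(SchedulableContext sc) {
            Database.executeBatch(new LogArchiveBatch());
        }
    }

The job could then be scheduled monthly with, for example, System.schedule('Monthly log archive', '0 0 2 1 * ?', new LogArchiveBatch());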

How can you determine the size of an object?

In order to determine the size of an object, from setup, go to Storage Usage in order to see each object's storage value in the Current Data Storage Usage section.

In order to improve report performance when filtering, what is something you can do?

In order to improve report performance, filter your reports on fields that are optimized for search (indexed), such as the standard Id, Name, OwnerId, CreatedDate, SystemModStamp, and RecordTypeId fields, master-detail fields, and lookup fields. Custom fields are optimized for search if External ID or Unique is selected for the field. Salesforce Support can also be contacted to create a custom index on a field.

How can you improve the performance of SOQL queries?

In order to improve the performance of SOQL queries, selective filter conditions should be used. A query is considered selective if it contains a filter that uses a custom or standard indexed field, and the number of records returned by it is less than the system defined threshold.

How can you optimize the performance of Visualforce pages?

In order to optimize the performance of Visualforce pages:
* Use filters to limit the data that SOQL calls and custom controllers return
* Use AND statements in the WHERE clause to limit the returned data
* Remove null results by using != null
* Use pagination with the SOQL OFFSET clause to return a specific subset of results
* Declare the with sharing keyword to return only the records that the current user can access
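A minimal custom controller sketch combining several of these points; Invoice__c and Status__c are hypothetical:

    // 'with sharing' restricts the results to records the current user can access
    public with sharing class InvoiceListController {
        public List<Invoice__c> getInvoices() {
            // Filtered WHERE clause with AND, null exclusion, and OFFSET pagination
            return [
                SELECT Id, Name, Amount__c
                FROM Invoice__c
                WHERE Status__c = 'Open' AND Amount__c != null
                ORDER BY Name
                LIMIT 50 OFFSET 0
            ];
        }
    }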

What is the Transaction/Centralized style of MDM?

In the Transaction/Centralized style of MDM, a central hub is the single provider of all master data. The master data attributes are stored and maintained using linking, cleansing, and matching and enriching algorithms, which enhances the data. The enhanced data is then published back down to the respective source systems.

In the case of multiple enterprise systems and data integration, what is the best approach to establishing the system of record?

In the case of multiple enterprise systems and data integrations, the best approach for establishing the system of record is to review data flows with business users.

What is a nuance when you are using a deterministic methodology for matching and you have several data elements being used for matching?

In the deterministic methodology, the number of data elements and the complexity of matching and scoring rules can seriously impact performance.

How can you assess the accuracy of your data?

Install a data quality app from the AppExchange. It can match your records against a trusted source and tell you how your data can be improved.

Can you merge contacts that have different primary accounts?

Lightning Experience allows for merging contacts that have different primary accounts, unless the contact record is associated with a portal user. Salesforce classic only allows merging duplicate contacts that have the same primary account.

What is a benefit to using a list custom setting?

List custom settings can be used to create a reusable set of static data that can be accessed across a Salesforce org. SOQL queries are not required to access list custom settings data, which makes it a low-cost and efficient solution for storing and accessing reusable data.
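A minimal Apex sketch, assuming a hypothetical list custom setting Country_Code__c with a field ISO_Code__c:

    // No SOQL query is needed; the data is read from the application cache
    Map<String, Country_Code__c> allCodes = Country_Code__c.getAll();
    Country_Code__c us = Country_Code__c.getValues('United States');
    String iso = (us == null) ? null : us.ISO_Code__c;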

How can you assess the completeness of your data?

List the fields required for each business use, then run a report that shows you the percentage of blanks for these fields. Alternatively, you can use a data quality app from the AppExchange.

What is Master Data Management (MDM)?

MDM is the technology, tools, and processes that ensure master data is coordinated across the enterprise. MDM provides a unified master data service that provides accurate, consistent, and complete master data across the enterprise to business partners.

What are Master Data Management Tools used for?

Master Data Management (MDM) focuses on the consistency of data across systems and apps. Specifically, MDM tools provide:
* Record creation assistance
* Record maintenance across systems
* Integration management
* Regulatory and compliance enforcement
* Enterprise-wide visibility

What are some Salesforce methods of managing the creation of duplicate records?

Matching rules are used to determine how records are matched to identify duplicates. Duplicate rules can be used to alert and block users from creating duplicate records, and duplicate jobs can be run to find duplicates across the org. Reports can be used to share duplicate job results with users.

How do you perform data classification?

Metadata can be identified and categorized based on several different factors, such as risk levels and required security controls.

What are metadata types, and what can they be used for?

Metadata types can be used to define the components of, and document, the data architecture. Each metadata type also has several fields that represent related metadata components, such as CustomField, BusinessProcess, RecordType, and ValidationRule, which can also be used for documentation.

What are the three traditional survivorship techniques?

* Most Recent - records are ordered from most recent to least recent, and the most recent record is considered the eligible survivor.
* Most Frequent - multiple records containing the same information are treated as an indication of correctness.
* Most Complete - field completeness is the primary factor of correctness, i.e., records with the most values populated are considered the most viable candidates for survivorship.

What are a few data standards, and how do you measure/track them?

* Naming - set naming conventions for records. Will suffixes and abbreviations always be included?
* Formatting - determine how dates and money are represented. Is the representation always the same?
* Workflow - determine processes for record creation, reviewing, updating, and archiving, and the stages that a record goes through in its lifetime.
* Quality - set appropriate standards for data quality, including the ability to measure and score records.
* Records and Ownership - determine who owns a record, who is accountable for changes to data, and who is notified when there are changes to data.
* Security and Permissions - determine the appropriate levels of privacy for data.
* Monitoring - outline a process for ensuring quality control of data.

Does Salesforce offer a native solution for backing up data every day?

No, Salesforce does not offer a native solution for backing up data every day automatically. The data export option allows for scheduling weekly or monthly exports of data.

How much data and file storage are orgs allocated?

Orgs are allocated 10 GB of data storage, plus incrementally added user storage. Orgs are also allocated 10 GB of file storage, plus additional file storage based on the number of standard user licenses: Enterprise, Performance, and Unlimited Edition orgs get an additional 2 GB of file storage per user license, while Contact Manager, Group, and Professional Edition orgs are allocated 612 MB per standard user license.

How much big object record storage are orgs allotted?

Orgs are allotted 1 million big object records.

What is PK Chunking?

PK Chunking (or Primary Key Chunking) is a strategy for querying large data sets. PK Chunking is a feature of the Bulk API that splits a query into chunks of records with sequential record Ids (i.e. the Primary Keys). Ids are always indexed, so this is an efficient method for querying large data sets.

What is PK Chunking and when should you use it?

PK Chunking stands for Primary Key Chunking (the primary key is the object's record Id). PK Chunking splits bulk queries on very large tables into chunks based on the record IDs of the queried records. You should enable PK Chunking when querying tables with more than 10 million records or when a bulk query consistently times out.

What is parallel recalculation used for?

Parallel recalculation allows updates to sharing rules to be processed in multiple threads in parallel, speeding up the time to complete.

What is parallel load?

Parallel load refers to loading data in parallel mode in order to process multiple batches of records concurrently. Loading records in parallel mode provides the maximum degree of parallelism and performance.

What are person accounts and what are a few key behaviors?

Person accounts are accounts that can also be used as contacts in many situations. Person accounts can't be included in account hierarchies. Person accounts can be associated with events or tasks using either the Name or Related To fields. Person accounts can be enabled as users for customer communities and portals, but cannot be enabled as users for partner communities or portals. To change a record type from business account to person account or vice versa, you must use the API. Person accounts count against both account and contact storage, as each person account consists of one account and one contact.

What is Probabilistic Matching?

Probabilistic Matching uses a statistical approach in measuring the probability that two customer records represent the same individual. It is made to work on a wider set of data elements to be used for matching. It uses weights on the fields to calculate match scores, and it uses thresholds to determine a match, non-match, or possible match.

What does processing more batches concurrently mean?

Processing more batches concurrently means having a higher degree of parallelism, and having a higher degree of parallelism means having better data throughput.

What are the levels of visibility for custom settings?

Protected and Public.

What can be used in order to report on historical trends?

Reporting snapshots can be used to report on historical trends. The data of a source report can be stored in a target custom object by scheduling a report to run at a specific time. Once a reporting snapshot has been set up, custom reports can be created and run from that object.

How can you assess the age of your data?

Run a report on the Last Modified Date of Records. What percentage of records have been updated recently?

How can you assess the consistency of your data?

Run a report to show the values used for dates, currency, state, country, and region fields. How many variations are used for a single value?

Why must SOQL queries be selective?

SOQL queries must be selective, particularly queries inside triggers, to achieve the best performance. To avoid long execution times, non-selective SOQL queries may be terminated by the system.

What can you use Salesforce Connect to do?

Salesforce Connect can be used in order for Salesforce Users to create/view/edit object records in an external system. An external data source and external object can be defined for this.

What does enabling "Create Audit Fields" feature in Salesforce do?

Salesforce has the ability to set system fields through the API. When migrating data from an external system, the API lets you set a number of fields on objects that were previously read only. By setting these fields, records will appear to have been created at the time they were originally created in your old systems. Audit fields can only be set on create.

What is deferred sharing used for?

Sharing calculations can be deferred prior to a data load to minimize the data loading time. Calculation of the sharing rules is temporarily suspended so that it does not affect the performance of the data load. Deferring sharing rule calculations should be preferred when making a large number of changes to roles or users.

What are some data archiving approaches?

Some data archiving approaches are:
* Data Export & Deletion - tools such as the Data Loader and third-party ETL tools can be used to export and delete a large number of records using the Bulk API automatically on a regular basis.
* Scheduled Exports - the data export option offered by Salesforce allows scheduling a weekly or monthly export of Salesforce data, which can be stored in a data warehouse.
* Storing Field History Data - an Apex trigger can automatically create a copy of field values when a record is created or updated by a user. The copy can be stored in a custom object for reporting.
* Storing Summarized Data - reporting snapshots can store summarized data in a custom object for historical reporting. Schedulable batch Apex can also aggregate and store data.

What API can be utilized to query large sets of data and delete them?

The Bulk API 2.0 can be utilized to query large sets of data and delete them; its API calls do not count against the number of SOAP or REST API calls in an org.

What is the BULK API optimized for?

The Bulk API is optimized for processing large sets of data, from thousands up to millions of records. It was specifically developed to simplify and optimize the process of loading or deleting large amounts of data.

When importing or exporting a large number of Salesforce records, how can you ensure max performance?

The Bulk API can be used in parallel mode in order to ensure maximum performance while importing or exporting a large number of Salesforce records.

What is the Coexistence style of MDM?

The Coexistence style of MDM is used when the master data is stored in the central master data system and updated in the source systems as well. It is an expensive approach since master data changes can happen in the MDM system as well as source systems. Data is mastered in the source systems and then synchronized with the central hub.

What is the Consolidation style of MDM Implementation?

The Consolidation Approach uses a central hub for storing the golden record, which is used for reporting and reference. This style is non-intrusive since data in the source systems is not modified. Rather, the master data is consolidated from various systems, cleansed, matched, and integrated to offer a complete single record. It provides a trusted source of data for reporting and analytics.

Which MDM Implementation style can be used to consolidate data from various source systems in a central hub as well as ensure the data in source systems is not modified?

The Consolidation Method.

If you have inconsistencies in addresses, what Salesforce tool can you use?

The Mass Update Addresses tool can be used to resolve any inconsistencies in addresses.

What can be used in order to retrieve, deploy, create, update or delete customizations in Salesforce?

The Metadata API can be used with the Force.com IDE or Visual Studio Code in order to retrieve, deploy, create, update or delete customization information such as custom objects, custom fields, and page layouts.

What Lightning Component allows merging duplicate contact records in Lightning?

The Potential Duplicates Lightning Component.

What is the RACI Model useful for?

The RACI model can be used in order to outline the users that are Responsible, Accountable, Consulted, and Informed for the different types of data. Responsible users own the data, Accountable users must approve changes to the data, Consulted users can provide information about the data, and Informed users need to be notified about any changes to the data.

What function can be used in a validation rule to enforce the formatting of a field value?

The REGEX function can be used in a validation rule to enforce the formatting of data entered in a field.
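For example, a validation rule sketch on a hypothetical Zip_Code__c field that rejects non-blank values that are not a 5-digit or 9-digit US ZIP code (the rule blocks the save when the formula evaluates to true, and REGEX must match the entire field value):

    AND(
        NOT(ISBLANK(Zip_Code__c)),
        NOT(REGEX(Zip_Code__c, "[0-9]{5}(-[0-9]{4})?"))
    )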

What is the Registry style of MDM implementation?

The Registry Style is typically used when there are various source systems and there is a need to identify duplicates by running cleansing and matching algorithms from the source system. Unique global identifiers are assigned to matching records to help identify a single version of the truth. Although non-intrusive, it does not utilize a central hub.

What is the SOAP API optimized for?

The SOAP API is optimized for real-time client applications that update a few records at a time. The SOAP API requires development to implement complex processes to upload data in bite-sized chunks, monitor results, and retry failed records. This method is acceptable for small data loads but not for large ones.

What can the Schema Builder be used for in Salesforce?

The Schema Builder can be used to view, create, and update the data model in Salesforce.

How can you use PK Chunking?

The Sforce-Enable-PKChunking header can be specified on the job request to utilize PK Chunking.
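For example (100,000 is the documented default chunk size, and the maximum is 250,000):

    Sforce-Enable-PKChunking: chunkSize=100000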

What is the term used for the authoritative data source for a given piece of information in an organization that uses multiple enterprise systems?

The System of Record (SOR).

What tool can be used in order to translate text into other languages?

The Translation Workbench can be used to translate text into other languages.

What is the canonical model used for?

The canonical model is used for communication between various enterprise systems. This design pattern is a form of enterprise application architecture that enables communication between different data formats. It involves creating a data model that supersedes all the others, along with a 'translator' module or layer through which all existing systems exchange data.

What is needed to document an integration between systems?

The data model and integration specifications can be utilized to document the integration between systems.

What are the different data governance models?

The different data governance models are as follows:
* Decentralized/Bottom-Up - focuses on decentralized execution of rules, standards, policies, and procedures. Each user is responsible for data quality maintenance while following their own processes. The data governance council plays a limited role.
* Decentralized/Bottom-Up (Cross-Functional) - focuses on a decentralized approach in which each user is responsible for data quality maintenance, but data is shared with other business units, which requires additional considerations.
* Centralized/Top-Down - focuses on a centralized approach in which the creation of rules, standards, and policies is done by a central data governance body, which is also responsible for data quality maintenance. Individual users consume data and submit maintenance requests.
* Hybrid - focuses on the creation of all rules, standards, and policies by a central data governance body, and their execution by individual business users based on their own processes. The central body acts as a mentor and also maintains some data.

What are the different styles of MDM Implementation?

The different styles of MDM implementation are Registry, Consolidation, Coexistence, and Transaction/Centralized.

What options are available to customers as a method of backing up data?

The following options are available to customers as a method to backup their data:
* Data Export Service - manual or scheduled exports of your data via the UI
* Data Loader - manual, on-demand exports of your data via the API
* Report Export - manual, on-demand exports of your data via reports

When merging two or more records from different systems with different values of fields, what are the factors that determine which value should win?

The factors that determine which value should win are trust score, decay, validations, and precedence:
* Trust score - calculated based on how much a particular data source or system is trusted.
* Decay - the rate at which the trust score may decrease over time.
* Validations - defined on the data attributes to evaluate and determine the trust score.
* Precedence - determines the field value that should survive if the trust scores of the systems are the same.

What are the types of backups that APIs will allow you to set up?

The four APIs (REST, SOAP, Bulk, Metadata) will allow you to set up a full, incremental, or partial backup.

What are the pros and cons to doing a full backup?

The full backup contains all data. The pros are that it contains all the information that may be required (in case of data loss), and the cons are that it represents a large volume of data and takes more time to retrieve a subset of data to handle a data incident.

What is the Golden Record/Single Source of Truth (SSOT)?

The golden or single source of truth represents the location of a certain data element where it is mastered. It is typically the master hub that is used for data consolidation in a master data management system. An SSOT system provides authentic and relevant data that other systems in an org can refer to.

What does a list custom setting type do?

The list custom setting type defines application-level data, such as country or state codes, and provides a reusable set of data that is frequently used within your application; putting that data in a list custom setting streamlines access to it. Data in list custom settings does not vary by profile or user but is available organization-wide.

How does the probabilistic method of matching optimize searching and matching?

The probabilistic methodology uses hashed values of the data to optimize searching and matching, however, there is the added overhead of deriving and persisting the hashed values when updating or adding new data.

What is a System of Record (SOR)?

The system of record is defined as the enterprise system that acts as the most authoritative source for a given piece of data. It is the primary repository of correct, durable, restorable and disaster-ready data.

What are the two types of Big Objects?

The two types of Big Objects are:
* Standard Big Objects - objects defined by Salesforce and included in Salesforce products.
* Custom Big Objects - new objects you create to store information unique to your org.

What must you do in order to set up the "Create Audit Fields" feature?

To set up the "Create Audit Fields" feature, a system admin must first go to "User Interface" in the Setup menu and select the checkboxes for "Set Audit Fields upon Record Creation" and "Update Records with Inactive Owners". After doing so, the ability can be assigned through a user's profile or through a permission set.

What is truncating?

Truncating is a fast way to permanently remove all records from a custom object while keeping the object and its metadata intact for future use. Truncation can only be used on custom objects, and if the object is referenced by another object through a lookup field, is on the master side of a master-detail relationship, is referenced in a reporting snapshot, has a custom index or external ID, or has a skinny table, you are not allowed to truncate the object.

How many duplicate records can be merged into a single record?

Up to three duplicate records can be manually merged into a single record.

How can you assess your data for duplication?

Use the duplicate management features in Salesforce, and install a duplicate detection app from the AppExchange.

When uploading records via the Bulk API, how many records can a single batch contain?

When loading records via the Bulk API, a single batch cannot contain more than 10,000 records.

When setting quality standards, what should you do?

When setting appropriate standards for data quality, you should include the ability to measure and score records. Put a value on the different components of data quality (Completeness, usage, accuracy, etc), and with an emphasis on sustainability of your plan, think about how you can incorporate visible data quality measurements within your applications to help users support the standards.

What does setting a custom setting as protected do?

When you set a custom setting as protected and the setting is contained in a managed package, subscribing orgs can't see the custom setting, and it doesn't display as part of the package list. They also cannot access the setting using Apex or the API. If it is contained in an unmanaged package, the custom setting is available to all.

What does setting a custom setting as public do?

When you set the custom setting as public, the custom setting is available through the Enterprise WSDL like any custom object. You can package custom settings defined as public. The subscribing org can then edit the values and access them using Apex and the SOAP API, regardless of the type of package - managed or unmanaged.

Describe the process for uploading records using the Bulk API.

When you upload records using the Bulk API, the records are streamed to Force.com to create a new job. As the data comes in for the job, it is stored in temporary storage and then sliced into user-defined batches (10,000 records max). Batches can then be processed in parallel or serially depending on your needs. The API logs the status of each job and tries to reprocess failed records automatically; if a job times out, the Bulk API automatically puts it back in the queue and retries it for you.

When assessing the state of your data - which questions should you ask?

Who is using the customer data? What are the business needs for data? Which data is used the most? How is the data being used?

What is Field History Tracking?

With Field History Tracking, you can select certain fields to track and display the field history in the History related list of an object. Field history data is retained for up to 18 months within your org, and for up to 24 months via the API.

What is a custom setting?

With custom settings, you can create a custom set of data. Custom setting data cannot be migrated via a change set (only the custom setting definition can be deployed). When defining the custom setting, you choose the setting type: list or hierarchy.

With optional lookup fields, how can you avoid locks during data migration?

With optional lookup fields, you can avoid locks by setting the "Clear the value of this field" option. With this option, whenever a record that has a lookup to the lookup record is inserted or updated, Salesforce doesn't lock the lookup record; it only validates that the lookup record exists.

What can you do if you need to track the values of more than 60 fields?

You can use an Apex trigger to store the old and new field values in a custom object.
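A minimal trigger sketch, assuming a hypothetical custom object Field_History__c and tracking of changes to the standard Account Rating field:

    trigger AccountFieldHistory on Account (after update) {
        List<Field_History__c> entries = new List<Field_History__c>();
        for (Account acc : Trigger.new) {
            Account oldAcc = Trigger.oldMap.get(acc.Id);
            // Record the old and new values whenever the field changes
            if (acc.Rating != oldAcc.Rating) {
                entries.add(new Field_History__c(
                    Record_Id__c  = acc.Id,
                    Field_Name__c = 'Rating',
                    Old_Value__c  = oldAcc.Rating,
                    New_Value__c  = acc.Rating));
            }
        }
        if (!entries.isEmpty()) {
            insert entries;
        }
    }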

What can you use in order to send information to an external system when the value of a field changes?

You can use an outbound message to send information to an external system when the value of a field changes. This is done with a workflow rule that uses an outbound message action, selecting the fields that should be sent as part of the outbound message.

What can you use event monitoring to do?

You can use event monitoring in order to:
* Monitor data loss
* Increase adoption
* Optimize performance

What API should you use if you need to backup an object containing a large volume of records, or perform a backup that raises performance challenges?

You should use the Bulk API as it will generally be faster than any other APIs. The REST and SOAP APIs may sometimes get better results though depending on other factors.

What API should you use if you need to preserve governor limits regarding the number of API calls, and why?

You should use the Bulk API because it does not consume API calls, but consumes Bulk API calls, which are less restrictive.

What API should you use if you need to backup Metadata?

You should use the Metadata API as it is by far the most exhaustive API to retrieve Metadata. A large part of Metadata is also available on the REST, SOAP and Tooling APIs.

What API should you use if you need to backup an object that contains a lot of XML-like information?

You should use the REST or Bulk API.

What API should you use if you need to backup files?

You should use the REST or SOAP API as the Bulk API does not yet support Base64 fields.

What API should you use if you need to preserve the governor limit regarding the number of batches per rolling 24-hour period?

You should use the REST or SOAP API as they are not subject to Batch-specific governor limits, however they do have their own limits.

What API should you use if you need to backup an object that is not yet supported by the Bulk API?

You should use the REST or SOAP API as they generally support all queryable objects.

