Salesforce Integration Architecture Designer
Remote Process Invocation—Fire and Forget: Security Considerations - Apex callouts
A call to a remote system must maintain the confidentiality, integrity, and availability of the request. The following are security considerations specific to using Apex SOAP and HTTP calls in this pattern. One-way SSL is enabled by default, but two-way SSL is supported with both self-signed and CA-signed certificates to maintain authenticity of both the client and server. Salesforce does not support WS-Security when generating the Apex proxy class. Where necessary, consider using one-way hashes or digital signatures using the Apex Crypto class methods to ensure the integrity of the request. The remote system must be protected by implementing the appropriate firewall mechanisms.
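As a sketch, the one-way hash technique mentioned above might look like the following in Apex. The endpoint, header name, and payload are illustrative placeholders, not part of the source material.

```apex
// Sketch: adding an integrity check to an outbound request using the Crypto class.
// The endpoint and header name are hypothetical placeholders.
public class SignedCalloutSketch {
    public static HttpResponse send(String payload) {
        // One-way hash (SHA-256) of the payload, sent alongside the body so the
        // receiver can verify the request was not altered in transit
        Blob digest = Crypto.generateDigest('SHA-256', Blob.valueOf(payload));

        // Alternatively, a digital signature could be produced with a private key:
        // Blob signature = Crypto.sign('RSA-SHA256', Blob.valueOf(payload), privateKey);

        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://remote.example.com/api/orders'); // placeholder endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setHeader('X-Content-SHA256', EncodingUtil.base64Encode(digest));
        req.setBody(payload);
        return new Http().send(req);
    }
}
```

The receiving system recomputes the hash over the body and compares it to the header value to detect tampering.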
Batch Data Synchronization: Example
A utility company uses a mainframe-based batch process that assigns prospects to individual sales reps and teams. This information needs to be imported into Salesforce on a nightly basis. The customer has decided to implement change data capture on the source tables using a commercially available ETL tool. The solution works as follows: A cron-like scheduler executes a batch job that assigns prospects to users and teams. After the batch job runs and updates the data, the ETL tool recognizes these changes using change data capture. The ETL tool collates the changes from the data store. The ETL connector uses the Salesforce SOAP API to load the changes into Salesforce.
Remote Process Invocation—Request and Reply: Describe a synchronous remote process invocation using Apex calls.
An action is initiated on the Visualforce or Lightning page (for example, a button click). The browser (via a client-side controller in the case of a Lightning component) performs an HTTP POST that in turn performs an action on the corresponding Apex controller. The controller invokes the actual call to the remote web service. The response from the remote system is returned to the Apex controller. The controller processes the response, updates data in Salesforce as required, and re-renders the page.
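The controller step described above can be sketched as follows. The class, endpoint, and field usage are illustrative assumptions; a named credential is assumed for authentication.

```apex
// Sketch of the request-reply flow: an Aura-enabled controller method that
// makes a synchronous callout and returns the processed result to the page.
// The class name, named credential, and response handling are hypothetical.
public with sharing class QuoteController {
    @AuraEnabled
    public static String requestQuote(Id opportunityId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Remote_Pricing_Service/quotes'); // placeholder named credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{ 'opportunityId' => opportunityId }));

        // Synchronous: the transaction waits for the remote response
        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() == 200) {
            // Process the response and update Salesforce data as required
            // (callout completed before DML, as Apex requires)
            update new Opportunity(Id = opportunityId, Description = res.getBody());
        }
        return res.getBody(); // re-rendered by the client-side controller
    }
}
```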
Remote Process Invocation—Fire and Forget: Error Handling and Recovery
An error handling and recovery strategy must be considered as part of the overall solution. The best method depends on the solution you choose: Apex callouts, outbound messaging, or platform events.
What are three categories of integration?
Data Integration—These patterns address the requirement to synchronize data that resides in two or more systems so that both systems always contain timely and meaningful data. Process Integration—The patterns in this category address the need for a business process to leverage two or more applications to complete its task. Virtual Integration—The patterns in this category address the need for a user to view, search, and modify data that's stored in an external system.
What is Data Integration Pattern?
Data Integration—These patterns address the requirement to synchronize data that resides in two or more systems so that both systems always contain timely and meaningful data. Data integration is often the simplest type of integration to implement, but requires proper information management techniques to make the solution sustainable and cost effective. Such techniques often include aspects of master data management (MDM), data governance, mastering, deduplication, data flow design, and others.
What is Batch Data Synchronization?
Data stored in Lightning Platform is created or refreshed to reflect updates from an external system, or changes from Lightning Platform are sent to an external system. Updates in either direction are done in a batch manner.
What is Remote Call-In?
Data stored in Lightning Platform is created, retrieved, updated, or deleted by a remote system.
Remote Process Invocation—Fire and Forget: Data Volumes
Data volume considerations depend on which solution you choose. For the limits of each solution, see the Salesforce Limits Quick Reference Guide.
What are some considerations for Remote Process Invocation—Request and Reply?
Does the call to the remote system require Salesforce to wait for a response before continuing processing? Is the call to the remote system a synchronous request-reply or an asynchronous request? If the call to the remote system is synchronous, does Salesforce have to process the response as part of the same transaction as the initial call? Is the message size small or large? Is the integration based on the occurrence of a specific event, such as a button click in the Salesforce user interface, or DML-based events? Is the remote endpoint able to respond to the request with low latency? How many users are likely to be executing this transaction during a peak period?
Remote Process Invocation—Fire and Forget: Governor Limits
Due to the multitenant nature of the Salesforce platform, there are limits to outbound callouts. Limits depend on the type of outbound call and the timing of the call. In the case of platform events, different limits and allocations apply. See the Platform Events Developer Guide. There are no governor limits for outbound messaging. See the Salesforce Limits Quick Reference Guide.
Remote Process Invocation—Request and Reply: Enhanced External Services invokes a REST API call
Enhanced External Services allows you to invoke an externally hosted service in a declarative manner (no code required). This feature is best used when the following conditions are met: The externally hosted service is a RESTful service, and its definitions are available in OpenAPI 2.0 JSON schema format. The request and response definitions contain primitive data types such as boolean, datetime, double, integer, string, or an array of primitive data types. Nested object types and parameters sent within the HTTP requests, such as headers, are supported. The transaction can be invoked from a Lightning Flow. The integration transaction doesn't risk exceeding the synchronous Apex governor limits.
Batch Data Synchronization: Good Solution when Salesforce is the Data Master = Replication via third-party ETL tool
Leverage a third-party ETL tool that allows you to run change data capture against ERP and Salesforce data sets. In this solution, Salesforce is the data source, and you can use time/status information on individual rows to query the data and filter the target result set. This can be implemented by using SOQL together with SOAP API and the query() method, or by using SOAP API and the getUpdated() method.
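As a sketch, filtering on row timestamps with SOQL might look like the following. The object, fields, and watermark value are illustrative; a real ETL tool would track its own watermark and could instead call getUpdated() for the same delta as record IDs.

```apex
// Sketch: selecting rows changed since the last ETL run, filtering on the
// standard SystemModstamp audit field. The watermark value is a placeholder.
Datetime lastRun = Datetime.now().addDays(-1); // placeholder: last successful run time
List<Account> changed = [
    SELECT Id, Name, SystemModstamp
    FROM Account
    WHERE SystemModstamp > :lastRun
];
// The SOAP API getUpdated() call returns a comparable delta as record IDs,
// which the ETL tool then retrieves and loads into the target.
```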
Batch Data Synchronization: Best Solution when Remote System is the Data Master = Replication via third-party ETL tool
Leverage a third-party ETL tool that allows you to run change data capture against source data. The tool reacts to changes in the source data set, transforms the data, and then calls Salesforce Bulk API to issue DML statements. This can also be implemented using the Salesforce SOAP API.
What is Long Polling?
Long polling, also called Comet programming, emulates an information push from a server to a client. Similar to a normal poll, the client connects and requests information from the server. However, instead of sending an empty response if information isn't available, the server holds the request and waits until information is available (an event occurs). The server then sends a complete response to the client. The client then immediately re-requests information. The client continually maintains a connection to the server, so it's always waiting to receive a response. If the server times out, the client connects again and starts over. NOTE: The Salesforce Streaming API uses the Bayeux protocol and CometD for long polling. Bayeux is a protocol for transporting asynchronous messages, primarily over HTTP. CometD is a scalable HTTP-based event routing bus that uses an AJAX push technology pattern known as Comet. It implements the Bayeux protocol.
What is Mediation Routing?
Mediation routing is the specification of a complex "flow" of messages from component to component. For example, many middleware-based solutions depend on a message queue system. While some implementations permit routing logic to be provided by the messaging layer itself, others depend on client applications to provide routing information or allow for a mix of both paradigms. In such complex cases, mediation (on the part of middleware) simplifies development, integration, and validation.
Remote Process Invocation—Fire and Forget: Good Solution - Workflow-driven outbound messaging
No customization is required in Salesforce to implement outbound messaging. This is the recommended solution for this type of integration when the remote process is invoked from an insert or update event. Salesforce provides a workflow-driven outbound messaging capability that allows sending SOAP messages to remote systems triggered by an insert or update operation in Salesforce. These messages are sent asynchronously and are independent of the Salesforce user interface. The outbound message is sent to a specific remote endpoint. The remote service must be able to participate in a contract-first integration where Salesforce provides the contract. On receipt of the message, if the remote service doesn't respond with a positive acknowledgment, Salesforce retries sending the message, providing a form of guaranteed delivery. When using middleware, this solution becomes a "first-mile" guarantee of delivery.
Remote Process Invocation—Fire and Forget: Best Solution - Process-driven platform events
No customization is required in Salesforce to implement platform events. This is the recommended solution when the remote process is invoked from an insert or update event. Platform events are event messages (or notifications) that your apps send and receive to take further action. Platform events simplify the process of communicating changes and responding to them without writing complex logic. One or more subscribers can listen to the same event and carry out actions. For example, a software system can send events containing information about printer ink cartridges. Subscribers can subscribe to the events to monitor printer ink levels and place orders to replace cartridges with low ink levels. External apps can listen to event messages by subscribing to a channel through CometD. Platform apps, such as Visualforce pages and Lightning components, can subscribe to event messages with CometD as well.
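Although events can be published declaratively, they can also be published from Apex. A minimal sketch follows; Order_Event__e and its Status__c field are hypothetical platform event definitions, not from the source.

```apex
// Sketch: publishing a platform event from Apex. Order_Event__e is a
// hypothetical platform event object with a custom Status__c text field.
Order_Event__e evt = new Order_Event__e(Status__c = 'Shipped');
Database.SaveResult result = EventBus.publish(evt);
if (!result.isSuccess()) {
    for (Database.Error err : result.getErrors()) {
        // Publishing is fire-and-forget; log failures for later investigation
        System.debug('Publish failed: ' + err.getMessage());
    }
}
```

Subscribers (triggers, flows, or external CometD clients) then receive the event and carry out their own actions.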
Remote Process Invocation—Fire and Forget: Good Solution - Outbound messaging and callbacks
Callbacks provide a way to mitigate the impacts of out-of-sequence messaging. In addition, they handle these scenarios. Idempotency— If an acknowledgment isn't received in a timely fashion, outbound messaging performs retries. Multiple messages can be sent to the target system. Using a callback ensures that the data retrieved is at a specific point in time rather than when the message was sent. Retrieving more data—A single outbound message can send data only for a single object. A callback can be used to retrieve data from other related records, such as related lists associated with the parent object. The outbound message provides a unique Session ID that you can use as an authentication token to authenticate and authorize a callback with either the SOAP API or the REST API. The system performing the callback isn't required to separately authenticate to Salesforce. The standard methods of either API can then be used to perform the desired business functions. A typical use of this variant is the scenario in which Salesforce sends an outbound message to a remote system to create a record. The callback updates the original Salesforce record with the unique key of the record created in the remote system.
Remote Process Invocation—Fire and Forget: Suboptimal Solution - Batch Apex job that performs an Apex SOAP or HTTP asynchronous callout
Calls to a remote system can be performed from a batch job. This solution allows for batch remote process execution and for processing of the response from the remote system in Salesforce. However, there are limits to the number of calls for a given batch context. For more information, see the Salesforce Developer Limits and Allocations Quick Reference.
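A batch job that makes callouts must declare the Database.AllowsCallouts marker interface. A minimal sketch follows; the endpoint, class name, and the Sync_Pending__c field are hypothetical.

```apex
// Sketch of a batch job permitted to make callouts. The endpoint and the
// Sync_Pending__c custom field are illustrative placeholders.
public class RemoteSyncBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Name FROM Account WHERE Sync_Pending__c = true');
    }
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        for (Account a : scope) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://remote.example.com/sync'); // placeholder endpoint
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(a));
            // Each send counts against the callout limit for this execute scope
            HttpResponse res = new Http().send(req);
            // The response can be processed in Salesforce here if required
        }
    }
    public void finish(Database.BatchableContext bc) {}
}
```

Keeping the batch scope size below the per-transaction callout limit avoids hitting the limits referenced above.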
Remote Process Invocation—Fire and Forget: Forces
Consider the following forces when applying solutions based on this pattern. Does the call to the remote system require Salesforce to wait for a response before continuing processing? Is the call to the remote system synchronous or asynchronous? If the call to the remote system is synchronous, does the response need to be processed by Salesforce as part of the same transaction as the call? Is the message size small? Is the integration based on the occurrence of a specific event, such as a button click in the Salesforce user interface, or DML-based events? Is guaranteed message delivery from Salesforce to the remote system a requirement? Is the remote system able to participate in a contract-first integration in which Salesforce specifies the contract? In some solution variants (for example, outbound messaging), Salesforce specifies a contract that the remote system endpoint implements. Does the endpoint or the Enterprise Service Bus (ESB) support long polling? Are declarative configuration methods preferred over custom Apex development? In this case, solutions such as platform events are preferred over Apex callouts.
Remote Process Invocation—Request and Reply: Endpoint Capability and Standards Support
The capability and standards support for the endpoint depends on the solution that you choose. Apex SOAP callouts - The endpoint must be able to receive a web service call via HTTP. Salesforce must be able to access the endpoint over the public Internet. This solution requires that the remote system is compatible with the standards supported by Salesforce. At the time of writing, the web service standards supported by Salesforce for Apex SOAP callouts are: WSDL 1.1, SOAP 1.1, WSI-Basic Profile 1.1, and HTTP. Apex HTTP callouts - The endpoint must be able to receive HTTP calls. Salesforce must be able to access the endpoint over the public Internet. You can use Apex HTTP callouts to call REST services using the standard GET, POST, PUT, and DELETE methods.
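As a sketch, an Apex SOAP callout goes through a proxy class generated from the remote WSDL. The stub, port, and operation names below are hypothetical stand-ins for what WSDL-to-Apex generation would actually produce.

```apex
// Sketch: invoking a WSDL-generated Apex proxy class. RemoteOrderService,
// OrderPort, and createOrder are hypothetical names; real ones come from
// the wsdl2apex generation step.
RemoteOrderService.OrderPort stub = new RemoteOrderService.OrderPort();
stub.endpoint_x = 'https://remote.example.com/soap/orders'; // placeholder endpoint
stub.timeout_x = 60000; // callout timeout in milliseconds
String status = stub.createOrder('ORD-1001'); // synchronous SOAP call over HTTP
```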
Remote Process Invocation—Request and Reply: Middleware Capabilities
The following highlights the desirable properties of a middleware system that participates in this pattern.
Event handling - Desirable
Protocol conversion - Desirable
Translation and transformation - Desirable
Queuing and buffering - Desirable
Synchronous transport protocols - Mandatory
Asynchronous transport protocols - Not Required
Mediation routing - Desirable
Process choreography and service orchestration - Desirable
Transactionality (encryption, signing, reliable delivery, transaction management) - Desirable
Routing - Not Required
Extract, transform, and load - Not Required
Long polling - Not Required
Remote Process Invocation—Fire and Forget: Middleware Capabilities
The following highlights the desirable properties of a middleware system that participates in this pattern.
Event handling - Desirable
Protocol conversion - Desirable
Translation and transformation - Desirable
Queuing and buffering - Mandatory
Synchronous transport protocols - Not Required
Asynchronous transport protocols - Mandatory
Mediation routing - Desirable
Process choreography and service orchestration - Desirable
Transactionality (encryption, signing, reliable delivery, transaction management) - Mandatory
Routing - Not Required
Extract, transform, and load - Not Required
Long polling - Mandatory (required for platform events)
Batch Data Synchronization: Security Considerations
Any call to a remote system must maintain the confidentiality, integrity, and availability of the request. Different security considerations apply, depending on the solution you choose. A Lightning Platform license with at least "API Only" user permissions is required to allow authenticated API access to the Salesforce API. We recommend that standard encryption be used to keep password access secure. Use the HTTPS protocol when making calls to the Salesforce APIs. You can also proxy traffic to the Salesforce APIs through an on-premises security solution, if necessary.
Remote Process Invocation—Fire and Forget: Security Considerations
Any call to a remote system must maintain the confidentiality, integrity, and availability of the request. Different security considerations apply, depending on the solution you choose: Apex callouts, outbound messaging, or platform events.
Remote Process Invocation—Request and Reply: Security Considerations
Any call to a remote system must maintain the confidentiality, integrity, and availability of the request. The following security considerations are specific to using Apex SOAP and HTTP calls in this pattern. One-way SSL is enabled by default, but two-way SSL is supported with both self-signed and CA-signed certificates to maintain authenticity of both the client and server. Salesforce currently doesn't support WS-Security. Where necessary, consider using one-way hashes or digital signatures using the Apex Crypto class methods to ensure request integrity. The remote system must be protected by implementing the appropriate firewall mechanisms.
Remote Process Invocation—Fire and Forget: Endpoint Capability and Standards Support - The capability and standards support for the endpoint depends on the solution that you choose.
Apex SOAP callouts - The endpoint must be able to process a web service call via HTTP. Salesforce must be able to access the endpoint over the public Internet.
Apex HTTP callouts - The endpoint must be able to receive HTTP calls and be accessible to Salesforce over the public Internet. Apex HTTP callouts can be used to call RESTful services using the standard GET, POST, PUT, and DELETE methods.
Outbound message - The endpoint must be able to implement a listener that can receive SOAP messages in a predefined format sent from Salesforce. The remote listener must participate in a contract-first implementation, where the contract is supplied by Salesforce. Each outbound message has its own predefined WSDL.
Platform events - Triggers, processes, and flows can subscribe to events, and you can receive event notifications regardless of how they were published. Apex, APIs, flows, and other processes that can receive event notifications all provide an auto-subscription mechanism. Use CometD to subscribe to platform events from an external client: implement your own CometD client, or use EMP Connector, an open-source, community-supported tool that implements all the details of connecting to CometD and listening on a channel. Salesforce sends platform events to CometD clients sequentially in the order they're received; the order of event notifications is based on the replay ID of events.
Remote Process Invocation—Fire and Forget: Complex Integration Scenarios - Each solution in this pattern has different considerations for complex integration scenarios such as transformation and process orchestration.
Apex callouts - In certain cases, solutions prescribed by this pattern require implementing several complex integration scenarios best served by using middleware or having Salesforce call a composite service. These scenarios include: orchestration of business processes and rules involving complex flow logic, aggregation of calls and their results across calls to multiple systems, transformation of both inbound and outbound messages, and maintaining transactional integrity across calls to multiple systems.
Outbound messaging - Given the static, declarative nature of the outbound message, no complex integration scenarios, such as aggregation, orchestration, or transformation, can be performed in Salesforce. The remote system or middleware must handle these types of operations.
Platform events - Given the static, declarative nature of events, no complex integration scenarios, such as aggregation, orchestration, or transformation, can be performed in Salesforce. The remote system or middleware must handle these types of operations.
Remote Process Invocation—Fire and Forget: Reliable Messaging - Reliable messaging attempts to resolve the issue of guaranteeing the delivery of a message to a remote system in which the individual components are unreliable. The method of ensuring receipt of a message by the remote system depends on the solution you choose.
Apex callouts - Salesforce doesn't provide explicit support for reliable messaging protocols (for example, WS-ReliableMessaging). We recommend that the remote endpoint receiving the Salesforce message implement a reliable messaging system, like JMS or MQ. This system ensures full end-to-end guaranteed delivery to the remote system that ultimately processes the message. However, this system doesn't ensure guaranteed delivery from Salesforce to the remote endpoint that it calls. Guaranteed delivery must be handled through customizations to Salesforce. Specific techniques, such as processing a positive acknowledgment from the remote endpoint in addition to custom retry logic, must be implemented. Outbound messaging - Outbound messaging provides a form of reliable messaging. If no positive acknowledgment is received from the remote system, the process retries for up to 24 hours. This process guarantees delivery only to the point of the remote listener. The retry interval increases exponentially over time, starting with 15-second intervals and ending with 60-minute intervals. The overall retry period can be extended to seven days by request to Salesforce support, but automatic retries are limited to 24 hours. All failed messages after 24 hours are placed in a queue and administrators must monitor this queue for any messages exceeding the 24-hour delivery period and retry manually, if necessary. In most implementations, the remote listener calls another remote service. Ideally, the invocation of this remote service through a reliable messaging system ensures full end-to-end guaranteed delivery. The positive acknowledgment to the Salesforce outbound message occurs after the remote listener has successfully placed its own message on its local queue. Once the positive acknowledgment is received by Salesforce, automatic retries are stopped. Platform Events - Platform events provide a form of reliable messaging. Salesforce pushes the event to the subscribers. 
If the message doesn't get picked up by the subscriber, the subscriber may choose to replay the messages using the replay ID of previously received events.
What are asynchronous transport protocols?
Asynchronous transport protocols refer to protocols supporting activities wherein one thread in the caller sends the request message and sets up a callback for the reply. A separate thread listens for reply messages. When a reply message arrives, the reply thread invokes the appropriate callback, which reestablishes the caller's context and processes the reply. This approach enables multiple outstanding requests to share a single reply thread.
Remote Process Invocation—Fire and Forget: Results
The application of the solutions related to this pattern allows for:
User interface-initiated remote process invocations, in which the result of the transaction can be displayed to the end user
DML event-initiated remote process invocations, in which the result of the transaction can be processed by the calling process
Remote Process Invocation—Fire and Forget: Outbound messaging
Error handling—Because this pattern is asynchronous, the remote system handles error handling. For outbound messaging, Salesforce initiates a retry operation if no positive acknowledgment is received within the timeout period, for up to 24 hours. Error handling must be performed in the remote service because the message is effectively handed off to the remote system in a "fire-and-forget" manner. Recovery—Because this pattern is asynchronous, the system must initiate retries based on the service's quality-of-service requirements. For outbound messaging, Salesforce initiates retries if no positive acknowledgment is received from the outbound listener within the timeout period, up to 24 hours. The retry interval increases exponentially over time, starting with 15-second intervals and ending with 60-minute intervals. The timeout period can be extended to seven days by request to Salesforce support, but automatic retries are limited to 24 hours. All failed messages after 24 hours are placed in a queue and administrators must monitor this queue for any messages exceeding the 24-hour delivery period and retry manually, if necessary. For custom Apex callouts, a custom retry mechanism must be built if the quality-of-service requirements warrant it.
Remote Process Invocation—Fire and Forget: Platform Events
Error handling—Error handling must be performed by the remote service because the event is effectively handed off to the remote system for further processing. Because this pattern is asynchronous, the remote system handles message queuing, processing, and error handling. Additionally, platform events aren't processed within database transactions. As a result, published platform events can't be rolled back within a transaction. Recovery—Because this pattern is asynchronous, the remote system must initiate retries based on the service's quality-of-service requirements. The replay ID associated with each event is atomic and increases with every published event. This ID can be used to replay the stream from a specific event (for example, based on the last successfully captured event). High-volume platform event messages are stored for 72 hours (three days). You can retrieve past event messages when using CometD clients to subscribe to a channel.
Batch Data Synchronization: Error Handling and Recovery - Read from Salesforce using Change Data Capture
Error handling—Error handling must be performed in the remote service because the event is effectively handed off to the remote system for further processing. Because this pattern is asynchronous, the remote system handles message queuing, processing, and error handling. Additionally, Change Data Capture events aren't processed within database transactions. As a result, published events can't be rolled back within a transaction. Recovery—Because this pattern is asynchronous, the remote system must initiate retries based on the service's quality-of-service requirements. The replay ID associated with each Change Data Capture event is atomic and increases with every event published. This ID can be used to replay the stream from a specific event (for example, based on the last successfully captured event). High-volume platform event messages are stored for 72 hours (three days). You can retrieve past event messages when using CometD clients to subscribe to a channel.
Batch Data Synchronization: Error Handling and Recovery - Write to Salesforce
Error handling—Errors that occur during a write operation can result from a combination of factors in the application. The API calls return a result set that consists of the following information, which should be used to retry the write operation (if necessary):
Record identifying information
Success/failure notification
A collection of errors for each record
Recovery—Restart the ETL process to recover from a failed write operation. If the operation succeeds but there are failed records, an immediate restart or subsequent execution of the job should address the issue. In this case, a delayed restart might be a better solution because it allows time to triage and correct data that might be causing the errors.
Batch Data Synchronization: Error Handling and Recovery - Read from Salesforce using a third-party ETL system
Error handling—If an error occurs during a read operation, implement a retry for errors that aren't infrastructure-related. In the event of repeated failure, standard processing using control tables/error tables should be implemented in the context of an ETL operation to:
Log the error
Retry the read operation
Terminate if unsuccessful
Send a notification
Recovery—Restart the ETL process to recover from a failed read operation. If the operation succeeds but there are failed records, an immediate restart or subsequent execution of the job should address the issue. In this case, a delayed restart might be a better solution because it allows time to triage and correct data that might be causing the errors.
Remote Process Invocation—Fire and Forget: Apex callouts
Error handling—Because invocation of the end process is handed off to the remote system, the callout only handles exceptions in the initial invocation of the remote service. For example, a timeout event is triggered if no positive acknowledgment is received from the remote callout. The remote system must handle subsequent errors that occur after the initial invocation is handed off for asynchronous processing. Recovery—Recovery is more complex in this scenario. A custom retry mechanism must be created if quality-of-service requirements dictate it.
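A fire-and-forget Apex callout is typically made from an asynchronous @future method so the originating transaction doesn't wait. The following sketch illustrates the hand-off; the class, endpoint, and status-code convention are illustrative assumptions.

```apex
// Sketch: a fire-and-forget Apex callout. A trigger hands the record ID to this
// asynchronous @future method; only the initial invocation is error-checked here.
// Endpoint and the 202-acknowledgment convention are hypothetical.
public class OrderDispatcher {
    @future(callout=true)
    public static void dispatch(Id orderId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://remote.example.com/orders'); // placeholder endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{ 'orderId' => orderId }));
        try {
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 202) {
                // No positive acknowledgment: custom retry logic would go here
                System.debug('Handoff not acknowledged: ' + res.getStatus());
            }
        } catch (CalloutException e) {
            // Timeout or connection failure on the initial invocation
            System.debug('Invocation failed: ' + e.getMessage());
        }
    }
}
```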
Batch Data Synchronization: Error Handling and Recovery - External master system
Errors should be handled in accordance with the best practices of the master system.
What is event handling and what are the key process involved in it?
Event handling is the receipt of an identifiable occurrence at a designated receiver ("handler"). The key processes involved in event handling include: Identifying where an event should be forwarded. Executing that forwarding action. Receiving a forwarded event. Taking some kind of appropriate action in response, such as writing to a log, invoking an error or recovery process, or sending an extra message. NOTE: In Salesforce integrations using middleware, the control of event handling is assumed by the middleware layer; it collects all relevant events (synchronous or asynchronous) and manages distribution to all endpoints, including Salesforce. Alternatively, this capability can also be achieved with the Salesforce enterprise messaging platform by using the event bus with platform events.
What is Extract, Transform, and Load (ETL)?
Extract, transform, and load (ETL) refers to a process that involves: Extracting data from the source systems. This typically involves data from several source systems, and both relational and non-relational structures. Transforming the data to fit operational needs, which can include data quality levels. The transform stage usually applies a series of rules or functions to the extracted data from the source to derive the data for loading into the end target(s). Loading the data into the target system. The target system can vary widely from database, operational data store, data mart, data warehouse, or other operational systems. NOTE: Salesforce now also supports Change Data Capture which is the publishing of change events which represent changes to Salesforce records. With Change Data Capture, the client or external system receives near-real-time changes of Salesforce records. This allows the client or external system to synchronize corresponding records in an external data store.
Remote Process Invocation—Fire and Forget: Security Considerations - Outbound Messaging
For outbound messaging, one-way SSL is enabled by default. However, two-way SSL can be used together with the Salesforce outbound messaging certificate. The following are some additional security considerations. Whitelist Salesforce server IP ranges for remote integration servers. Protect the remote system by implementing the appropriate firewall mechanisms.
Remote Process Invocation—Fire and Forget: Security Considerations - Platform Events
For platform events, the subscribing external system must be able to authenticate to the Salesforce Streaming API. Platform events conform to the existing security model configured in the Salesforce org. To subscribe to an event, the user needs read access to the event entity. To publish an event, the user needs create permission on the event entity.
Remote Process Invocation—Request and Reply: Idempotent Design Considerations
Idempotent capabilities guarantee that repeated invocations are safe. If idempotency isn't implemented, repeated invocations of the same message can have different results, potentially resulting in data integrity issues. Potential issues include the creation of duplicate records or duplicate processing of transactions. It's important to ensure that the remote procedure being called is idempotent. It's almost impossible to guarantee that Salesforce only calls once, especially if the call is triggered from a user interface event. Even if Salesforce makes a single call, there's no guarantee that other processes (for example, middleware) do the same. The most typical method of building an idempotent receiver is for it to track duplicates based on unique message identifiers sent by the consumer. Apex web service or REST calls must be customized to send a unique message ID. In addition, operations that create records in the remote system must check for duplicates before inserting. Check by passing a unique record ID from Salesforce. If the record exists in the remote system, update the record. In most systems, this operation is termed as an upsert operation.
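A minimal sketch of such an idempotent receiver, tracking duplicates by the unique message ID and upserting by the Salesforce record ID; all identifiers here are invented:

```python
class IdempotentReceiver:
    """Remote-system side: safe under repeated delivery of the same message."""
    def __init__(self):
        self.seen_message_ids = set()
        self.records = {}      # keyed by the Salesforce record ID
        self.processed = 0

    def handle(self, message_id, record_id, payload):
        if message_id in self.seen_message_ids:
            return "duplicate"              # a retry; safe to ignore
        self.seen_message_ids.add(message_id)
        # upsert: update when the record already exists, otherwise insert
        self.records[record_id] = payload
        self.processed += 1
        return "processed"

rx = IdempotentReceiver()
first = rx.handle("msg-1", "001A", {"status": "new"})
second = rx.handle("msg-1", "001A", {"status": "new"})   # same message retried
```

Because the second delivery is detected as a duplicate, the record is processed exactly once no matter how many times the caller retries.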
Remote Process Invocation—Request and Reply: Complex Integration Scenarios
In certain cases, the solution prescribed by this pattern can require the implementation of several complex integration scenarios, which are best served by using middleware or having Salesforce call a composite service. These scenarios include: orchestration of business processes and rules involving complex flow logic; aggregation of calls and their results across calls to multiple systems; transformation of both inbound and outbound messages; and maintaining transactional integrity across calls to multiple systems.
Remote Process Invocation—Fire and Forget: Sketch - The following diagram illustrates a call from Salesforce to a remote system in which create or update operations on a record trigger the call.
In this scenario: A remote system subscribes to the platform event. An update or insert occurs on a given set of records in Salesforce. A Salesforce process triggers when a set of conditions is met, and this process creates a platform event. The remote listener receives the event message and places it on a local queue. The queuing application forwards the message to the remote application for processing. Note: In the case where the remote system must perform operations against Salesforce, you can implement an optional callback operation.
Remote Process Invocation—Request and Reply: Error Handling and Recovery
It's important to include an error handling and recovery strategy as part of the overall solution. Error handling—When an error occurs (exceptions or error codes are returned to the caller), the caller manages error handling. For example, an error message can be displayed on the end user's page or logged to a table requiring further action. Recovery—Changes aren't committed to Salesforce until the caller receives a successful response. For example, the order status isn't updated in the database until a response that indicates success is received. If necessary, the caller can retry the operation.
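The recovery rule can be sketched as follows, with a stand-in remote call that fails twice before succeeding; the function names and payloads are illustrative:

```python
def call_with_retry(remote_call, commit, max_attempts=3):
    """Commit the Salesforce-side change only after a successful response."""
    last_error = None
    for _ in range(max_attempts):
        try:
            response = remote_call()
        except ConnectionError as exc:
            last_error = exc      # log the error; it could be surfaced to the user
            continue
        commit(response)          # changes are committed only on success
        return response
    raise last_error              # recovery exhausted; escalate to the caller

attempts = {"count": 0}
def flaky_remote_call():
    # simulated remote system: unavailable on the first two attempts
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("remote system unavailable")
    return {"order": "ORD-9", "status": "created"}

committed = []
response = call_with_retry(flaky_remote_call, committed.append)
```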
Batch Data Synchronization: Suboptimal Solution when Salesforce is the Data Master = Remote process invocation
It's possible for Salesforce to call into a remote system and perform updates to data as they occur. However, this causes considerable ongoing traffic between the two systems, and greater emphasis must be placed on error handling and locking. This pattern can cause continual updates, which has the potential to impact performance for end users.
Batch Data Synchronization: Suboptimal Solution when Remote System is the Data Master = Remote call-in
It's possible for a remote system to call into Salesforce by using one of the APIs and perform updates to data as they occur. However, this causes considerable ongoing traffic between the two systems, and greater emphasis must be placed on error handling and locking. This pattern can cause continual updates, which has the potential to impact performance for end users.
Remote Process Invocation—Fire and Forget: Idempotent Design Considerations
Platform events are only published to the bus once. There is no retry on the Salesforce side. It is up to the ESB to request that the events be replayed. In a replay, the platform event's replay ID remains the same, and the ESB can deduplicate messages based on the replay ID. Idempotency is important for outbound messaging because it's asynchronous and retries are initiated when no positive acknowledgment is received. Therefore, the remote service must be able to handle messages from Salesforce in an idempotent fashion. Outbound messaging sends a unique ID per message, and this ID remains the same for any retries. The remote system can track duplicate messages based on this unique ID. The unique record ID for each record being updated is also sent, and can be used to prevent duplicate record creation. The idempotent design considerations in the Remote Process Invocation—Request and Reply pattern also apply to this pattern.
Remote Process Invocation—Fire and Forget: Calling Mechanisms - The calling mechanism depends on the solution chosen to implement this pattern.
Process Builder - Used by both the process-driven and customization-driven solutions. Events trigger the Salesforce process, which can then publish a platform event for subscription by a remote system. Lightning component or Visualforce and Apex controllers - Used to invoke a remote process asynchronously using an Apex callout. Workflow rules - Used only for the outbound messaging solution. Create and update DML events trigger the Salesforce workflow rules, which can then send a message to a remote system. Apex triggers - Used for trigger-driven platform events and invocation of remote processes, using Apex callouts from DML-initiated events. Apex batch classes - Used for invocation of remote processes in batch mode.
What is Process Integration Pattern?
Process Integration—The patterns in this category address the need for a business process to leverage two or more applications to complete its task. When you implement a solution for this type of integration, the triggering application has to call across process boundaries to other applications. Usually, these patterns include both orchestration (where one application is the central "controller") and choreography (where applications are multi-participants and there is no central "controller"). These types of integrations often involve complex design, testing, and exception handling requirements. Also, such composite applications are typically more demanding on the underlying systems because they often support long-running transactions and the ability to report on and/or manage process state.
What is process choreography and service orchestration?
Process choreography and service orchestration are each forms of "service composition" where any number of endpoints and capabilities are being coordinated. The difference between choreography and service orchestration is: Choreography can be defined as behavior resulting from a group of interacting individual entities with no central authority. Orchestration can be defined as behavior resulting from a central conductor coordinating the behaviors of individual entities performing tasks independent of each other. NOTE: Portions of business process choreographies can be built in Salesforce workflows or using Apex. We recommend that all complex orchestrations be implemented in the middleware layer because of Salesforce timeout values and governor limits (especially in solutions requiring true transaction handling).
Specify the style of integration: Process, Data, or Virtual.
Process—Process-based integrations can be defined as ways to integrate the processing of functional flow across two or more applications. These integrations typically involve a higher level of abstraction and complexity, especially for transactionality and rollback. Data—Data integrations can be defined as the integration of the information used by applications. These integrations can range from a simple table insert or upsert to complex data updates requiring referential integrity and complex translations. Virtual—Virtual integration can be defined as integration where Salesforce interacts with data that resides in an external system, without the need to replicate the data within Salesforce. This type of integration is always triggered through an event from the Salesforce platform (for example, a user action, a workflow, a search, or a record update), resulting in real-time data integration with an external source.
What is protocol conversion?
Protocol conversion is typically a software application that converts the standard or proprietary protocol of one device to the protocol suitable for another device to achieve interoperability. In the context of middleware, connectivity to a particular target system may be constrained by protocol. In such cases, the message format needs to be converted to or encapsulated within the format of the target system, where the payload can be extracted. This is also known as tunneling. NOTE: Salesforce doesn't support native protocol conversion, so it's assumed that any such requirements are met by either the middleware layer or the endpoint.
Remote Process Invocation—Fire and Forget: Solution Variant—Platform Events: Publishing Behavior and Transactions: Platform event messages are published either immediately or after a transaction is committed, depending on the publish behavior you set in the platform event definition. Platform events defined to be published immediately don't respect transaction boundaries, but those defined to be published after a transaction is committed do. When you define a platform event, choose one of the following publish behaviors:
Publish After Commit to have the event message published only after a transaction commits successfully. Select this option if subscribers rely on data that the publishing transaction commits. For example, a process publishes an event message and creates a task record. A second process that is subscribed to the event is fired and expects to find the task record. Another reason for choosing this behavior is when you don't want the event message to be published if the transaction fails. Publish Immediately to have the event message published when the publish call executes. Select this option if you want the event message to be published regardless of whether the transaction succeeds. Also choose this option if the publisher and subscribers are independent, and subscribers don't rely on data committed by the publisher. For example, the immediate publishing behavior is suitable for an event used for logging purposes. With this option, a subscriber might receive the event message before data is committed by the publisher transaction. One solution is to create a scheduled action in your process that publishes a platform event 0 hours after the custom object creation date. The scheduled action runs after the custom object is committed. Another option is to publish the event from Queueable or future Apex (a Queueable class or a @future method), which ensures that the publish call is made only when the original transaction is committed. If the trigger's transaction is rolled back, the Queueable job and @future method aren't executed and the event isn't published.
What is Queuing and Buffering?
Queuing and buffering generally rely on asynchronous message passing, as opposed to a request-response architecture. In asynchronous systems, message queues provide temporary storage when the destination program is busy or connectivity is compromised. In addition, most asynchronous middleware systems provide persistent storage to back up the message queue. The key benefit of an asynchronous message process is that if the receiver application fails for any reason, the senders can continue unaffected; the sent messages simply accumulate in the message queue for later processing when the receiver restarts. NOTE: Salesforce provides explicit queuing capability only in the form of workflow-based outbound messaging. To provide true message queueing for other integration scenarios (including orchestration, process choreography, quality of service, and so on), a middleware solution is required.
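The decoupling benefit can be sketched with an in-memory queue; real middleware would persist the queue, and the receiver here is simulated:

```python
from collections import deque

class MessageQueue:
    """Temporary storage between sender and receiver (persistent in real middleware)."""
    def __init__(self):
        self._pending = deque()

    def send(self, message):
        self._pending.append(message)   # sender continues unaffected

    def drain(self, receiver):
        # deliver accumulated messages once the receiver is back
        delivered = 0
        while self._pending:
            receiver(self._pending.popleft())
            delivered += 1
        return delivered

queue = MessageQueue()
for i in range(3):
    queue.send({"seq": i})              # receiver is offline; messages accumulate

inbox = []
count = queue.drain(inbox.append)       # receiver restarts and catches up
```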
What key integration patterns should be considered for System to Salesforce?
Remote Call-In: For Synchronous or Asynchronous Process Integration, or for Synchronous Data Integration. Batch Data Synchronization: For Asynchronous Data Integration.
What key integration patterns should be considered for Salesforce to System?
Remote Process Invocation—Request and Reply: For Synchronous Process Integration or Synchronous Data Integration. Remote Process Invocation—Fire and Forget: For Asynchronous Process Integration. UI Update Based on Data Changes: For Asynchronous Data Integration. Data Virtualization: For Synchronous Virtual Integration.
What is Routing?
Routing can be defined as specifying the complex flow of messages from component to component. In modern services-based solutions, such message flows can be based on a number of criteria, including header, content type, rule, and priority. NOTE: With Salesforce integrations, it's assumed that any such requirements are met by either the middleware layer or the endpoint. Message routing can be coded in Apex, but we don't recommend it due to maintenance and performance considerations.
Remote Process Invocation—Fire and Forget: State Management - When integrating systems, unique record identifiers are important for ongoing state tracking. For example, if a record is created in the remote system, you have two options. Salesforce stores the remote system's primary or unique surrogate key for the remote record. The remote system stores the Salesforce unique record ID or some other unique surrogate key.
Salesforce is the master - The remote system must store either the Salesforce RecordId or some other unique surrogate key from the Salesforce record. Remote system is the master - Salesforce must store a reference to the unique identifier in the remote system. Because the process is asynchronous, storing this identifier can't be part of the original transaction. Salesforce must provide a unique ID in the call to the remote process. The remote system must then call back to Salesforce, using that Salesforce unique ID, to update the Salesforce record with the remote system's unique identifier. The callback implies specific state handling in the remote application: it stores the Salesforce unique identifier for that transaction and uses it for the callback when processing is complete, or it stores the Salesforce unique identifier on the remote system's record.
Batch Data Synchronization: Best Solution when Salesforce is the Data Master = Salesforce Change Data Capture
Salesforce Change Data Capture publishes change events, which represent changes to Salesforce records. Changes include creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record. With Change Data Capture, you can receive near-real-time changes of Salesforce records, and synchronize corresponding records in an external data store. Change Data Capture takes care of the continuous synchronization part of replication. It publishes the deltas of Salesforce data for new and changed records. Change Data Capture requires an integration app for receiving events and performing updates in the external system.
What is Data Virtualization?
Salesforce accesses external data in real time. This removes the need to persist data in Salesforce and then reconcile the data between Salesforce and the external system.
Remote Process Invocation—Request and Reply: Salesforce Lightning—Lightning component or page initiates an Apex SOAP or REST callout in a synchronous manner. Salesforce Classic—A custom Visualforce page or button initiates an Apex SOAP callout in a synchronous manner. If the remote endpoint poses a risk of high-latency responses (refer to the latest limits documentation for the applicable limits), then an asynchronous callout, also called a continuation, is recommended to avoid hitting synchronous Apex transaction governor limits.
Salesforce enables you to consume a WSDL and generate a resulting proxy Apex class. This class provides the necessary logic to call the remote service. Salesforce also enables you to invoke HTTP (REST) services using standard GET, POST, PUT, and DELETE methods. A user-initiated action on a Visualforce or Lightning page calls an Apex controller action, which executes this proxy Apex class to perform the remote call. Visualforce pages and Lightning pages require customization of the Salesforce application.
Remote Process Invocation—Request and Reply: A custom Visualforce page or button initiates an Apex HTTP callout in a synchronous manner.
Salesforce enables you to invoke HTTP services using standard GET, POST, PUT, and DELETE methods. You can use several HTTP classes to integrate with RESTful services. It's also possible to integrate with SOAP-based services by manually constructing the SOAP message, but this isn't recommended because Salesforce can consume a WSDL and generate proxy classes instead. A user-initiated action on a Visualforce page calls an Apex controller action, which then executes the HTTP callout to perform the remote call. Visualforce pages require customization of the Salesforce application.
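A rough Python analogue of constructing such REST calls (the Apex equivalent uses the HttpRequest and Http classes); the base URL and paths are hypothetical, and nothing is sent over the network here:

```python
import json
import urllib.request

BASE_URL = "https://remote.example.com/api"   # hypothetical endpoint

def build_request(method, path, body=None):
    data = json.dumps(body).encode("utf-8") if body is not None else None
    request = urllib.request.Request(BASE_URL + path, data=data, method=method)
    request.add_header("Content-Type", "application/json")
    # urllib.request.urlopen(request, timeout=5) would execute the callout
    return request

get_request = build_request("GET", "/bills/ACC-42")
post_request = build_request("POST", "/orders", {"opportunityId": "006A"})
```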
What is Remote Process Invocation—Fire and Forget?
Salesforce invokes a process in a remote system but doesn't wait for completion of the process. Instead, the remote process receives and acknowledges the request and then hands off control back to Salesforce.
What is Remote Process Invocation—Request and Reply?
Salesforce invokes a process on a remote system, waits for completion of that process, and then tracks state based on the response from the remote system.
Batch Data Synchronization: Sketch - The following diagram illustrates the sequence of events in this pattern, where Salesforce is the data master.
Salesforce is the data master—Salesforce is the system of record for certain entities, and Salesforce Change Data Capture client applications can be informed of changes to Salesforce data. For an ETL tool to gain maximum benefit from data synchronization capabilities, consider the following: Chain and sequence the ETL jobs to provide a cohesive process. Use primary keys from both systems to match incoming data. Use specific API methods to extract only updated data. If importing child records in a master-detail or lookup relationship, group the imported data using its parent key at the source to avoid locking. For example, if you're importing contact data, group the contact data by the parent account key so the maximum number of contacts for a single account can be loaded in one API call. Failure to group the imported data usually results in the first contact record being loaded and subsequent contact records for that account failing in the context of the API call. Any post-import processing, such as triggers, should only process data selectively. If your scenario involves large data volumes, follow the best practices in the white paper Best Practices for Deployments with Large Data Volumes.
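The grouping advice can be sketched as below; the account keys and contact rows are invented:

```python
from itertools import groupby

contacts = [
    {"account": "ACC-2", "name": "Ada"},
    {"account": "ACC-1", "name": "Bo"},
    {"account": "ACC-2", "name": "Cy"},
]

def batches_by_parent(rows):
    # sort by the parent key first so groupby sees contiguous groups;
    # each batch then touches a single parent account, avoiding lock
    # contention when the rows are loaded in one API call
    ordered = sorted(rows, key=lambda r: r["account"])
    return {key: list(group) for key, group in groupby(ordered, key=lambda r: r["account"])}

batches = batches_by_parent(contacts)
```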
Remote Process Invocation—Fire and Forget: Solution Variant—Outbound Messaging and Message Sequencing
Salesforce outbound messaging can't guarantee the sequence of delivery for its messages because a single message can be retried over a 24-hour period. Two methods for handling message sequencing in the remote system are available: Salesforce sends a unique message ID for each instance of an outbound message, and the remote system discards messages that have a duplicate message ID. Alternatively, Salesforce sends only the RecordId, and the remote system makes a callback to Salesforce to obtain the necessary data to process the request.
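The second method (sending only the RecordId) can be sketched like this; the record store and the call-in are simulated stand-ins for the real Salesforce APIs:

```python
# Current Salesforce state, keyed by RecordId (simulated; a real remote
# system would call back in via the SOAP or REST API).
salesforce_records = {"001A": {"status": "Closed Won"}}

def fetch_current_state(record_id):
    return salesforce_records[record_id]

def handle_outbound_message(record_id):
    # Regardless of the order in which messages arrive, the remote system
    # always fetches the latest record state, so sequencing no longer matters.
    return fetch_current_state(record_id)

state = handle_outbound_message("001A")
```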
Remote Process Invocation—Fire and Forget: Solution Variant—Outbound Messaging and Deletes
Salesforce workflow rules can't track deletion of a record. The rules can track only the insert or update of a record. Therefore, you can't directly initiate an outbound message from the deletion of a record. You can initiate a message indirectly with the following process. Create a custom object to store key information from the deleted records. Create an Apex trigger, fired by the deletion of the base record, to store information, such as the unique identifier in the custom object. Implement a workflow rule to initiate an outbound message based on the creation of the custom object record. It's important that state tracking is enabled by storing the remote system's unique identifier in Salesforce or Salesforce's unique identifier in the remote system.
Remote Process Invocation—Fire and Forget: Good Solution - Customization-driven platform events
Similar to process-driven platform events, but the events are created by Apex triggers or classes. You can publish and consume platform events by using Apex or an API. Platform events integrate with the Salesforce platform through Apex triggers. Triggers are the event consumers on the Salesforce platform that listen to event messages. When an external app uses the API or a native Salesforce app uses Apex to publish the event message, a trigger on that event is fired. Triggers run the actions in response to the event notifications.
Source/Target for Integration
Specifies the requester of the integration transaction along with the target(s) that provide the information. Note that the technical capabilities of the source and target systems, coupled with the type and timing of the integration, may require an additional middleware or integration solution.
What is Synchronous transport protocols?
Synchronous transport protocols refer to protocols that support activities wherein a single thread in the caller sends the request message, blocks to wait for the reply message, and then processes the reply. The request thread awaiting the response implies that there is only one outstanding request, or that the reply channel for this request is private to this thread.
Specify the blocking (or non-blocking) nature of the integration
Synchronous—Blocking or "near-real-time" requests can be defined as a request/response operation where the result of the process is returned to the caller immediately via this operation. Asynchronous—Non-blocking, queue, or message-based requests are invoked by a one-way operation, and the results (and any faults) are returned by invoking other one-way operations. The caller therefore makes the request and continues without waiting for a response.
What is UI Update Based on Data Changes?
The Salesforce user interface must be automatically updated as a result of changes to Salesforce data.
Batch Data Synchronization: Middleware Capabilities - The most effective external technologies used to implement this pattern are traditional ETL tools. It's important that the middleware tools chosen support the Salesforce Bulk API. It's helpful, but not critical, that the middleware tools support the getUpdated() function. This function provides the closest implementation to standard change data capture capability on the Salesforce platform.
The following table highlights the desirable properties of a middleware system that participates in this pattern:
Event handling - Desirable
Protocol conversion - Desirable
Translation and transformation - Mandatory
Queuing and buffering - Mandatory
Synchronous transport protocols - Not required
Asynchronous transport protocols - Mandatory
Mediation routing - Desirable
Process choreography and service orchestration - Mandatory
Transactionality (encryption, signing, reliable delivery, transaction management) - Desirable
Routing - Desirable
Extract, transform, and load - Mandatory
Long Polling - Mandatory (required for Salesforce Change Data Capture)
Batch Data Synchronization: Forces
There are various forces to consider when applying solutions based on this pattern: Should the data be stored in Salesforce? If not, there are other integration options an architect can and should consider (mashups, for example). If the data should be stored in Salesforce, should the data be refreshed in response to an event in the remote system? Should the data be refreshed on a scheduled basis? Does the data support primary business processes? Are there analytics (reporting) requirements that are impacted by the availability of this data in Salesforce?
Remote Process Invocation—Fire and Forget: Example - A telecommunications company wants to use Salesforce as a front end for creating accounts using the lead-to-opportunity process. An order is created in Salesforce when the opportunity is closed and won, but the back-end ERP system is the data master. The order must be saved to the Salesforce opportunity record, and the opportunity status changed to indicate that the order was created. The following constraints apply. No custom development in Salesforce. You don't require immediate notification of the order number after the opportunity converts to an order. The organization has an ESB that supports the CometD protocol and is able to subscribe to platform events.
This example is best implemented using Salesforce platform events, but it requires that the ESB subscribes to the platform event. On the Salesforce side: Create a Process Builder process to initiate the platform event (for example, when the opportunity status changes to Closed Won). Define a platform event that publishes the opportunity details. On the remote system side: The ESB subscribes to the Salesforce platform event using the CometD protocol. The ESB receives one or more notifications indicating that the opportunity is to be converted to an order. The ESB forwards the message to the back-end ERP system so that the order can be created. After the order is created in the ERP system, a separate thread calls back to Salesforce using the SessionId as the authentication token. The callback updates the opportunity with the order number and status. You can do this callback using documented pattern solutions, such as Salesforce platform events, the SOAP API, the REST API, or an Apex web service. This example demonstrates: implementation of a remote process invoked asynchronously; end-to-end guaranteed delivery; and a subsequent callback to Salesforce to update the state of the record.
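The end-to-end flow can be simulated in a few lines; the ERP, event stream, and callback below are all stand-ins (a real implementation would use a CometD client for the subscription and a Salesforce API for the callback):

```python
def erp_create_order(opportunity_id):
    # the back-end ERP assigns the order number (format invented here)
    return "ORD-" + opportunity_id

def salesforce_callback(opportunities, opportunity_id, order_number):
    # separate thread calling back into Salesforce with the result
    opportunities[opportunity_id].update(
        {"orderNumber": order_number, "status": "Order Created"}
    )

opportunities = {"006A": {"status": "Closed Won"}}
event_stream = [{"opportunityId": "006A"}]     # platform event notifications

for event in event_stream:                     # ESB subscriber loop
    order_number = erp_create_order(event["opportunityId"])
    salesforce_callback(opportunities, event["opportunityId"], order_number)
```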
Remote Process Invocation—Request and Reply: Data Volumes
This pattern is used primarily for small volume, real-time activities, due to the small timeout values and maximum size of the request or response for the Apex call solution. Don't use this pattern in batch processing activities in which the data payload is contained in the message.
Remote Process Invocation—Request and Reply: Example - A utility company uses Salesforce and has a separate system that contains customer billing information. They want to display the billing history for a customer account without storing that data in Salesforce. They have an existing web service that returns a list of bills and the details for a given account, but the service has no capability to display this data in a browser.
This requirement can be accomplished with the following approach: Salesforce consumes the billing history service WSDL and generates an Apex proxy class. A Lightning component with a custom controller (or a Visualforce page with a custom controller) executes the Apex proxy class, passing the account number as the unique identifier. The custom controller parses the return values from the Apex callout, and the Lightning component or Visualforce page then renders the customer billing data to the user. This example demonstrates: tracking of customer state via an account number stored on the Salesforce account object, and subsequent processing of the reply message by the caller.
Remote Process Invocation—Fire and Forget: Suboptimal Solution - Custom Lightning component or Visualforce page that initiates an Apex SOAP or HTTP asynchronous callout
This solution is typically used in user interface-based scenarios, but it requires customization, and the code must handle guaranteed delivery of the message. It is similar to the Remote Process Invocation—Request and Reply solution that uses a Visualforce page or Lightning component together with an Apex callout. The difference is that in this pattern, Salesforce doesn't wait for the request to complete before handing off control to the user. After receiving the message, the remote system acknowledges receipt and then processes the message asynchronously. The remote system hands control back to Salesforce before it begins to process the message, so Salesforce doesn't have to wait for processing to complete.
Remote Process Invocation—Fire and Forget: Timeliness
Timeliness is less of a factor with the fire-and-forget pattern. Control is handed back to the client either immediately or after positive acknowledgment of a successful hand-off to the remote system. With Salesforce outbound messaging, the acknowledgment must occur within 24 hours (this can be extended to seven days); otherwise, the message expires. For platform events, Salesforce sends the events to the event bus and doesn't wait for a confirmation or acknowledgment from the subscriber. If the subscriber doesn't pick up the message, the subscriber can request to replay the event using the event replay ID. High-volume event messages are stored for 72 hours (three days). To retrieve past event messages, use CometD clients to subscribe to a channel.
Remote Process Invocation—Request and Reply: Timeliness
Timeliness is of significant importance in this pattern. Usually: The request is typically invoked from the user interface, so the process must not keep the user waiting. Salesforce has a configurable timeout of up to 120 seconds for calls from Apex. Completion of the remote process must occur in a timely manner, concluding within the Salesforce timeout limit and within user expectations. External calls are subject to Apex synchronous transaction governor limits, so take care to mitigate the risk of instantiating more than 10 transactions that run for more than five seconds each. In addition to ensuring the external endpoint is performant, options to mitigate the risk of a timeout include: setting the timeout of the external callout to five seconds, and using a continuation in Visualforce or Lightning components to handle long-running transactions.
Batch Data Synchronization: Timeliness
Timeliness isn't of significant importance in this pattern. However, care must be taken to design the interfaces so that all of the batch processes complete in a designated batch window. As with all batch-oriented operations, we strongly recommend that you take care to insulate the source and target systems during batch processing windows. Loading batches during business hours might result in some contention, resulting in either a user's update failing, or more significantly, a batch load (or partial batch load) failing. For organizations that have global operations, it might not be feasible to run all batch processes at the same time because the system might continually be in use. Data segmentation techniques using record types and other filtering criteria can be used to avoid data contention in these cases.
What is Transactionality (encryption, signing, reliable delivery, transaction management)?
Transactionality can be defined as the ability to support global transactions that encompass all necessary operations against each required resource. Transactionality implies the support of all four ACID (atomicity, consistency, isolation, durability) properties, where atomicity guarantees all-or-nothing outcomes for the unit of work (transaction). NOTE: While Salesforce is transactional within itself, it's not able to participate in distributed transactions or transactions initiated outside of Salesforce. Therefore, it's assumed that for solutions requiring complex, multi-system transactions, transactionality (and associated roll-back/compensation mechanisms) be implemented at the middleware layer.
What is translation and transformation?
Transformation is the ability to map one data format to another to ensure interoperability between the various systems being integrated. Typically, this entails reformatting messages en route to match the requirements of the sender or recipient. In more complex cases, one application can send a message in its own native format, and two or more other applications might each receive a copy of the message in their own native format. Middleware translation and transformation tools often include the ability to create service facades for legacy or other non-standard endpoints; this allows those endpoints to appear to be service-addressable. NOTE: With Salesforce integrations, it's assumed that any such requirements are met by either the middleware layer or the endpoint. Transformation of data can be coded in Apex, but we don't recommend it due to maintenance and performance considerations.
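To illustrate why transformation is possible in Apex but best kept minimal, here is a hypothetical mapping of an external JSON payload to an Account; the source field names (`cust_name`, `phone_no`) are invented for the example:

```apex
// Illustrative only: a minimal field-level transformation in Apex.
// Maintaining many such mappings in code is why the middleware layer
// is the recommended home for transformation logic.
Map<String, Object> src = (Map<String, Object>) JSON.deserializeUntyped(
    '{"cust_name":"Acme","phone_no":"555-0100"}');
Account acct = new Account(
    Name  = (String) src.get('cust_name'),
    Phone = (String) src.get('phone_no'));
insert acct;
```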
Remote Process Invocation—Request and Reply: Apex batch classes
Used for invocation of remote processes in batch. For more information about this calling mechanism, see pattern Remote Process Invocation—Fire and Forget.
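A minimal sketch of such a batch class, assuming an `Order_Service` Named Credential and a hypothetical `Synced__c` flag on Order:

```apex
// Sketch: a batch class that makes callouts must implement
// Database.AllowsCallouts in addition to Database.Batchable.
global class OrderSyncBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Synced__c is an assumed custom field marking unsent orders.
        return Database.getQueryLocator(
            'SELECT Id, OrderNumber FROM Order WHERE Synced__c = false');
    }
    global void execute(Database.BatchableContext bc, List<Order> scope) {
        for (Order o : scope) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Order_Service/sync'); // assumed Named Credential
            req.setMethod('POST');
            req.setBody(JSON.serialize(o));
            new Http().send(req); // fire and forget: the response isn't acted on here
        }
    }
    global void finish(Database.BatchableContext bc) {}
}
```

Each execute invocation is its own transaction context, so callout and governor limits reset per batch scope.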
Remote Process Invocation—Request and Reply: Apex triggers
Used primarily for invocation of remote processes using Apex callouts from DML-initiated events. For more information about this calling mechanism, see pattern Remote Process Invocation—Fire and Forget.
Remote Process Invocation—Request and Reply: Enhanced External Services embedded in a Lightning flow, or Lightning components, or Visualforce and Apex controllers
Used when the remote process is triggered as part of an end-to-end process involving the user interface, and the result must be displayed or updated in a Salesforce record. For example, a credit card payment is submitted to an external payment gateway, and the payment results are immediately returned and displayed to the user.
What is Virtual Integration Pattern?
Virtual Integration—The patterns in this category address the need for a user to view, search, and modify data that's stored in an external system. When you implement a solution for this type of integration, the triggering application has to call out to other applications and interact with their data in real time. This type of integration removes the need for data replication across systems, and means that users always interact with the most current data.
Remote Process Invocation—Fire and Forget: Problem
When an event occurs in Salesforce, how do you initiate a process in a remote system and pass the required information to that process without waiting for a response from the remote system?
Remote Process Invocation—Request and Reply: State Management
When integrating systems, keys are important for ongoing state tracking. There are two options: Salesforce stores the remote system's primary or unique surrogate key for the remote record, or the remote system stores the Salesforce unique record ID or some other unique surrogate key. There are specific considerations for handling integration keys, depending on which system contains the master record: Salesforce - the remote system stores either the Salesforce RecordId or some other unique surrogate key from the record. Remote system - the call to the remote process returns the unique key from the application, and Salesforce stores that key value in a unique record field.
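When the remote system is the master, storing its key in an external ID field lets subsequent loads upsert without first querying for the Salesforce record. A sketch, where `ERP_Account_Id__c` is an assumed custom external ID field:

```apex
// Sketch: upserting on an assumed external ID field. Salesforce matches on
// ERP_Account_Id__c instead of the Salesforce RecordId, so the integration
// needs no key lookup before writing.
Account a = new Account(Name = 'Acme', ERP_Account_Id__c = 'A-1001');
upsert a ERP_Account_Id__c; // inserts if 'A-1001' is new, updates if it exists
```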
Batch Data Synchronization: State Management
You can implement state management by using surrogate keys between the two systems. If you need any type of transaction management across Salesforce entities, we recommend that you use the Remote Call-In pattern using Apex. Standard optimistic record locking occurs on the platform, and any update made using the API requires the user who is editing the record to refresh the record and initiate their transaction again. In the context of the Salesforce API, optimistic locking refers to a process where: Salesforce doesn't maintain the state of a record being edited by a specific user. Upon read, it records the time when the data was extracted. If the user updates the record and saves it, Salesforce checks to see if another user has updated the record in the interim. If the record has been updated, the system notifies the user that an update was made and that they should retrieve the latest version of the record before proceeding with their updates.
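Salesforce performs this check internally; purely as an illustration of the described sequence, equivalent logic expressed in Apex using the record's SystemModstamp might look like:

```apex
// Illustrative sketch of an optimistic concurrency check: compare the
// timestamp captured at read time against the current one before saving.
Account acct = [SELECT Id, Name, SystemModstamp FROM Account WHERE Id = :acctId];
Datetime readStamp = acct.SystemModstamp;

// ... the user edits the record in the meantime ...

Account current = [SELECT SystemModstamp FROM Account WHERE Id = :acctId];
if (current.SystemModstamp > readStamp) {
    // Another user changed the record in the interim: surface an error
    // and ask the user to refresh before retrying their update.
} else {
    acct.Name = 'Updated Name';
    update acct;
}
```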
Batch Data Synchronization: Sketch - The following diagram illustrates the sequence of events in this pattern, where the remote system is the data master.
You can integrate data that's sourced externally with Salesforce under the following scenarios: External system is the data master—Salesforce is a consumer of data provided by a single source system or multiple systems. In this scenario, it's common to have a data warehouse or data mart that aggregates the data before the data is imported into Salesforce. The ETL tool is then used to create programs that will: Read a control table to determine the last run time of the job and extract any other control values needed. Use the above control values as filters and query the source data set. Apply predefined processing rules, including validation, enrichment, and so on. Use available connectors/transformation capabilities of the ETL tool to create the destination data set. Write the data set to Salesforce objects. If processing is successful, update the control values in the control table. If processing fails, update the control tables with values that enable a restart and exit. We recommend that you create the control tables and associated data structures in an environment that the ETL tool has access to even if access to Salesforce isn't available. This provides adequate levels of resilience. Salesforce should be treated as a spoke in this process and the ETL infrastructure is the hub.
Remote Process Invocation—Request and Reply: A batch Apex job performs an Apex SOAP or HTTP callout in a synchronous manner.
You can make calls to a remote system from a batch job. This solution allows batch remote process execution and processing of the response from the remote system in Salesforce. However, the number of callouts allowed per batch transaction is limited. For more information, see Governor Limits. A given batch run can execute multiple transaction contexts (usually in intervals of 200 records), and the governor limits are reset per transaction context.
Remote Process Invocation—Fire and Forget: Suboptimal Solution - Trigger that's invoked from Salesforce data changes performs an Apex SOAP or HTTP asynchronous callout
You can use Apex triggers to perform automation based on record data changes. An Apex proxy class can be executed as the result of a DML operation by using an Apex trigger. However, all calls made from within the trigger context must be executed asynchronously.
Remote Process Invocation—Request and Reply: A trigger that's invoked from Salesforce data changes performs an Apex SOAP or HTTP callout in a synchronous manner.
You can use Apex triggers to perform automation based on record data changes. An Apex proxy class can be executed as the result of a DML operation by using an Apex trigger. However, all calls made from within the trigger context must execute asynchronously from the initiating event. Therefore, this solution isn't recommended for this integration problem. This solution is better suited for the Remote Process Invocation—Fire and Forget pattern.
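A sketch of the asynchronous hand-off a trigger must use: the trigger queues an `@future(callout=true)` method, which runs after the triggering transaction commits. The class name, trigger name, and `Order_Service` Named Credential are assumptions:

```apex
// Sketch: triggers can't make synchronous callouts, so the callout is
// deferred to a future method that runs in its own asynchronous transaction.
public class OrderCalloutService {
    @future(callout=true)
    public static void notifyRemote(Set<Id> orderIds) {
        for (Order o : [SELECT Id, OrderNumber FROM Order WHERE Id IN :orderIds]) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Order_Service/orders'); // assumed Named Credential
            req.setMethod('POST');
            req.setBody(JSON.serialize(o));
            new Http().send(req);
        }
    }
}

// In a separate trigger file:
trigger OrderAfterInsert on Order (after insert) {
    OrderCalloutService.notifyRemote(Trigger.newMap.keySet());
}
```

Because the callout runs after the trigger's transaction has committed, the caller gets no synchronous reply, which is why this mechanism suits Fire and Forget rather than Request and Reply.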
Remote Call-In: Context and Problem
You use Salesforce to track leads, manage your pipeline, create opportunities, and capture order details that convert leads to customers. However, Salesforce isn't the system that contains or processes orders. Orders are managed by an external (remote) system. That remote system needs to update the order status in Salesforce as the order passes through its processing stages. How does a remote system connect and authenticate with Salesforce to notify Salesforce about external events, create records, and update existing records?
Remote Process Invocation—Fire and Forget: Context
You use Salesforce to track leads, manage your pipeline, create opportunities, and capture order details that convert leads to customers. However, Salesforce isn't the system that holds or processes orders. After the order details are captured in Salesforce, an order must be created in the remote system, which manages the order through to its conclusion. When you implement this pattern, Salesforce calls the remote system to create the order, but doesn't wait for the call's successful completion. The remote system can optionally update Salesforce with the new order number and status in a separate transaction.
Batch Data Synchronization: Context and Problem
You're moving your CRM implementation to Salesforce and want to: Extract and transform accounts, contacts, and opportunities from the current CRM system and load the data into Salesforce (initial data import). Extract, transform, and load customer billing data into Salesforce from a remote system on a weekly basis (ongoing). Extract customer activity information from Salesforce and import it into an on-premises data warehouse on a weekly basis (ongoing). How do you import data into Salesforce and export data out of Salesforce, taking into consideration that these imports and exports can interfere with end-user operations during business hours, and involve large amounts of data?