ExecuteMultipleRequest and ExecuteTransactionRequest in Dynamics 365 CE

There are scenarios in which you will be integrating with the Common Data Service from outside the platform. This could be a console application designed to create records or an integration process designed to send data from an external system into Dynamics 365 CE. Creating, updating, or deleting data one record at a time could adversely affect performance and exhaust your organization's API limits.

The API limits only apply to online instances. Due to the shared nature of the online hosts, the limits are put in place by Microsoft to ensure the stability of the online platform for all customers. Microsoft limits the number of concurrent requests as well as the number of requests per user. The per-user limit is 60,000 API calls in five minutes, subject to change. Microsoft doesn't publish a general concurrency limit, but the ExecuteMultipleRequest documentation does state that it is limited to two concurrent requests. If the API limits are exceeded, a FaultException is returned.

There are two useful messages in the SDK that reduce the number of API calls by executing requests in bulk. ExecuteMultipleRequest and ExecuteTransactionRequest both execute batch requests against the API. Both messages have a batch size limit of 1,000 requests and two concurrent connections, so you may need to use multiple batches to complete your calls. An ExecuteTransactionRequest executes all calls in a single transaction, rolling back if an error occurs. ExecuteMultipleRequest does not run in a transaction and will not roll back previously completed requests if an error occurs. It can be configured to either stop processing when the first error is received or continue processing, keeping a record of each failed call.
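To make the batching concrete, here is a minimal sketch of an ExecuteMultipleRequest. It assumes an authenticated IOrganizationService named `service` and a collection of Entity objects named `accountsToCreate`, both hypothetical names for this illustration:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

// Build one batch of up to 1,000 requests.
var batch = new ExecuteMultipleRequest
{
    Settings = new ExecuteMultipleSettings
    {
        ContinueOnError = true,   // keep processing after a failed request
        ReturnResponses = false   // with false, only faulted items are returned
    },
    Requests = new OrganizationRequestCollection()
};

foreach (Entity account in accountsToCreate)
{
    batch.Requests.Add(new CreateRequest { Target = account });
}

var response = (ExecuteMultipleResponse)service.Execute(batch);
foreach (ExecuteMultipleResponseItem item in response.Responses)
{
    if (item.Fault != null)
    {
        // item.RequestIndex maps the fault back to the original request.
        Console.WriteLine("Request {0} failed: {1}", item.RequestIndex, item.Fault.Message);
    }
}
```

With ReturnResponses set to false, successful requests return nothing, which keeps the response payload small for large batches.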

Note that an ExecuteMultipleRequest is allowed to contain one or more ExecuteTransactionRequest calls, but an ExecuteTransactionRequest cannot contain any ExecuteMultipleRequests. ExecuteMultipleRequests cannot include another ExecuteMultipleRequest nor can an ExecuteTransactionRequest include another ExecuteTransactionRequest. Neither of these messages should be used in plugins or custom workflow activities, as they will adversely affect performance and could trigger the two-minute channel timeout exception.

Microsoft guidance on when to use ExecuteMultipleRequest or ExecuteTransactionRequest is:

  • Use ExecuteMultipleRequest for bulk data loads, or in external processes that intentionally execute long-running operations (greater than two minutes).
  • Use ExecuteMultipleRequest to minimize the round trips between custom client and Dynamics 365 servers, thereby reducing the cumulative latency incurred.
  • Use ExecuteTransactionRequest for external clients that require the batch of operations to be committed as a single, atomic database transaction or rollback if any exception is encountered. Be aware of the potential for database blocking for the duration of the long-running transaction.
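For comparison, here is a sketch of an all-or-nothing batch using ExecuteTransactionRequest. The `order` and `orderLine` variables are hypothetical Entity objects, and `service` is an authenticated IOrganizationService:

```csharp
using System.ServiceModel;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

var transaction = new ExecuteTransactionRequest
{
    // Every request succeeds, or the whole batch rolls back.
    Requests = new OrganizationRequestCollection
    {
        new CreateRequest { Target = order },
        new CreateRequest { Target = orderLine }
    },
    ReturnResponses = true
};

try
{
    var response = (ExecuteTransactionResponse)service.Execute(transaction);
}
catch (FaultException<OrganizationServiceFault> ex)
{
    // Nothing was committed; ex.Detail describes the failing request.
}
```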

ExecuteMultipleRequest and ExecuteTransactionRequest are both good tools for reducing the number of API calls and increasing the performance of external calls. Neither is meant to be used from within plugins or custom workflow activities. To learn how to use these messages, and to view sample code, use the Microsoft Docs references listed below.

Overview of Dynamics 365 for Customer Service

My study of MB-900 is continuing with an overview of Dynamics 365 for Customer Service. Where Dynamics 365 for Sales focused on helping sales associates build stronger, lasting relationships with customers, Dynamics 365 for Customer Service focuses on providing flexibility and a consistent level of service for customers. Support agents can use Dynamics 365 for Customer Service to better optimize their time and deliver the right answers to customers more quickly.

As with Dynamics 365 for Sales, Dynamics 365 for Customer Service contains customizable interactive dashboards. These dashboards keep track of support queues, knowledge articles, and current cases. Support agents can use them to quickly and dynamically determine which cases are the highest priority to follow up on. Once a case is identified and opened, a customized case resolution process helps the support agent find the tools and information needed to properly assist the customer.

Customer Service diagram from Microsoft Learn

In Dynamics 365 for Customer Service, a customer has an entitlement, also known as a support contract. This entitlement defines the level of support the customer can receive over a one-year period. The entitlement can include the number of support cases that can be opened as well as how quickly a support agent must respond to a case.

Dynamics 365 for Customer Service provides an omnichannel support solution, giving the customer multiple methods of interacting with the service center, each with a consistent experience regardless of the method they choose. These methods include online customer support portals that can be accessed from a computer or mobile device; integrated chat, with a service bot answering basic questions and recording them for a support agent to take over; email support queues; and phone calls direct to the service center.

When customers contact the service center through the portal, online chat, or email, a case is automatically created in Dynamics 365 for Customer Service. This case is compared against the customer's entitlement to determine which queue the case is assigned to. From that point, a support agent with access to the queue can pick up the case and start working it, and is then responsible for the case until it is resolved. Other support agents will be able to see that the case is being handled and can pick up the next case in the queue.

The support agent is given several tools to assist them in resolving the case. A case resolution process is automatically assigned to the case, based on the type of case. This process is customized by the business and contains several stages, each with multiple steps to be completed in order to resolve the case. Integrated knowledge base articles are offered to the support agent and can be emailed to the customer with possible resolutions.

Once the customer confirms that their problem has been resolved, the support agent is able to close the case as resolved. With the case resolved, the entitlement is updated. If the entitlement includes a specified number of support cases available to the customer, that number is decremented by one to reflect the number of cases they have remaining.

Demo of the customer support process in Dynamics 365 for Customer Service.

Demo of the customer support portal in Dynamics 365 for Customer Service.

Using Extension Methods in Dynamics 365 CE Plugins

In my previous blog post, Extension Methods in the .NET Framework, I covered how to use extension methods in C# and Visual Basic to extend the functionality of base types. Extension methods can also be useful when writing plugins for Dynamics 365 Customer Engagement (Dynamics CRM). I maintain my own library of extension methods that can be used as needed for any project I'm currently working on. The most common extensions for me involve the tracing service, the organization service, and the entity.

I have two types of tracing that I perform. The first is standard trace messages that are always logged to the plugin trace log. The second is debug-only trace messages that are only written to the plugin trace log if the assembly is compiled in Debug as opposed to Release. As a side note, it is always preferable to deploy assemblies compiled in Release mode to a production environment, to improve plugin performance. This makes using the #if DEBUG preprocessing directive more effective.

Without an extension method, sending a message to the plugin trace log is done by calling the Trace method of Microsoft.Xrm.Sdk.ITracingService.

using Microsoft.Xrm.Sdk;
using System;

namespace Sample.Tracing
{
    public class PluginTest : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            tracingService.Trace("Entering plug-in execution.");
            tracingService.Trace("Some debug information is logged to the plugin trace log.");
            // Additional code goes here
        }
    }
}

The standard way of tracing is effective on its own but is missing some elements that would make it more useful. As mentioned previously, you may only want certain tracing calls to execute if you are actively debugging your code. Prepending every trace message with a time stamp is also beneficial when trying to determine where plug-in performance is being adversely impacted. You can also parse your exception objects to output more detailed error messages directly to the plugin trace logs. For the sample below, I'm just going to demonstrate how we can expand the tracing log to handle debug-only messages and time-stamped messages.

using Microsoft.Xrm.Sdk;
using System;

namespace Sample.SdkExtensions
{
    public static class ExtITracingService
    {
        // Writes to the plugin trace log only when the assembly is compiled in Debug.
        public static void DebugMessage(this ITracingService tracingService, string format, params object[] args)
        {
#if DEBUG
            tracingService.LogMessage(format, args);
#endif
        }

        // Formats the message and prepends a round-trip date and time stamp.
        public static void LogMessage(this ITracingService tracingService, string format, params object[] args)
        {
            string message = (args == null || args.Length == 0)
                ? format
                : String.Format(format, args);
            tracingService.Trace("{0}: {1}", DateTime.Now.ToString("o"), message);
        }
    }
}

The DebugMessage extension method shows how to use the preprocessing directive to only execute when running in Debug mode. If the code is running in Debug, then it will call the LogMessage extension method. If it isn’t, then the method doesn’t do anything at all. This will allow us to have more detailed messages when running the assembly in Debug, and better performance when running the assembly in Release. The LogMessage extension method will create the appropriate message string from the format and optional arguments and pass that directly over to the ITracingService, prepended with a date and time stamp.

Now, we can update the plugin code to use our new extension methods.

using Sample.SdkExtensions;
using Microsoft.Xrm.Sdk;
using System;

namespace Sample.Tracing
{
    public class PluginTest : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            tracingService.LogMessage("Entering plug-in execution.");
            tracingService.DebugMessage("Some debug information is logged to the plugin trace log.");
            // Additional code goes here
        }
    }
}

As you can see, once the extension class is in the project, we reference its namespace in our using statements and the DebugMessage and LogMessage extension methods become available. Now, debug messages will only be added to the plugin trace log if the assembly is compiled as Debug, and both messages will be prepended with a time stamp. These extension methods increase the readability and usability of the code, giving consistent logging functionality no matter where the calls are made.

The entity is another area where it is useful to create extension methods. Getting and setting attribute values often requires the same several lines of code to retrieve the appropriate information. Examine the code below, which uses the standard SDK calls to retrieve the primary contact Id from one account and set it on another. While it would be preferable to retrieve the EntityReference itself in this scenario, I am writing it this way to show the value of reducing code complexity.

Guid entityReferenceId = account1.Contains("primarycontact")
    ? ((EntityReference)account1["primarycontact"]).Id
    : Guid.Empty;

if (entityReferenceId == Guid.Empty)
{
    if (account2.Contains("primarycontact"))
    {
        account2["primarycontact"] = null;
    }
}
else
{
    if (account2.Contains("primarycontact"))
    {
        account2["primarycontact"] = new EntityReference("contact", entityReferenceId);
    }
    else
    {
        account2.Attributes.Add("primarycontact", new EntityReference("contact", entityReferenceId));
    }
}

There are several lines of code required to perform a simple get and set operation. These operations can be created as extension methods on Microsoft.Xrm.Sdk.Entity, as we did for the tracing service. I won't be putting the extension methods here; instead, I will show how two new extension methods, GetEntityReferenceId and SetEntityReference, increase the readability of the plugin code.

Guid entityReferenceId = account1.GetEntityReferenceId("primarycontact");
account2.SetEntityReference("primarycontact", "contact", entityReferenceId);

As you can see, the extension methods now make it easier to set and get values from the entity image. As mentioned previously, this increases readability and reusability of the code, making debugging easier. Just be aware that if you are setting default values (such as Guid.Empty for Guid values) you’ll want to have additional checks in your code to ensure that you are working with the proper data.
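For reference, here is one possible shape for those two helpers. This is a sketch only; the Guid.Empty handling and the explicit attribute-name parameter on SetEntityReference are my assumptions, not the only way to write them:

```csharp
using System;
using Microsoft.Xrm.Sdk;

namespace Sample.SdkExtensions
{
    public static class ExtEntity
    {
        // Returns the Id of a lookup attribute, or Guid.Empty when it is absent or null.
        public static Guid GetEntityReferenceId(this Entity entity, string attributeName)
        {
            EntityReference reference = entity.GetAttributeValue<EntityReference>(attributeName);
            return reference != null ? reference.Id : Guid.Empty;
        }

        // Sets a lookup attribute, or clears it when the Id is Guid.Empty.
        public static void SetEntityReference(this Entity entity, string attributeName, string targetLogicalName, Guid id)
        {
            entity[attributeName] = (id == Guid.Empty)
                ? null
                : new EntityReference(targetLogicalName, id);
        }
    }
}
```

GetAttributeValue returns the default for the type when the attribute is missing, which is what lets the getter collapse to two lines.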

The final extension method I will touch on in this entry is related to Microsoft.Xrm.Sdk.IOrganizationService. In my previous blog post, I mentioned that overloading the methods of existing types is strongly discouraged. In the case of extending the organization service, I am intentionally ignoring that advice. My extension methods are simply shortcuts that pass through to the primary call, and even if Microsoft later adds overloads that mirror what I have done, it won't break my existing functionality. This is an exception to the recommendation that still needs to be handled with care.

// Global Values
EntityReference entityReference = new EntityReference("account", Guid.Parse("608DA112-2681-4967-B30E-A4132321010A")); // Normally will be retrieved from an entity.
string[] columnNames = new string[] { /* list of fields */ };

// Default SDK Retrieve call.
service.Retrieve(entityReference.LogicalName, entityReference.Id, new ColumnSet(columnNames));

// Overloaded extension Retrieve call.
service.Retrieve(entityReference, columnNames);

// The overload, defined in the extension static class
public static Entity Retrieve(this IOrganizationService service, EntityReference entityReference, string[] columnNames)
    => service.Retrieve(entityReference.LogicalName, entityReference.Id, new ColumnSet(columnNames));

As you can see, the overload is simply calling the SDK Retrieve method with the appropriate parameter values. No additional validation is being done. It simply allows for a shorter method call to simplify the code.

As mentioned in the previous article, there is no consensus on how often to use extension methods. Microsoft recommends implementing them "sparingly and only when you have to." It could be argued that the SDK methods are simple enough and don't fall under "when you have to." In my case, I prefer having multiple overloads, as I find that it makes debugging the code easier. My methods become shorter and easier to read. I have additional tracing methods, beyond what is shown in this blog post, to give more details in the plugin trace log. The ultimate goal for me is the maintainability, reusability, and readability of my code. Using extension methods with the SDK achieves that for me.

Extension Methods in the .NET Framework

Extension methods were first introduced in version 3.5 of the .NET Framework. They allow for easily extending a type without having to recompile or modify the original type. This is useful for extending types of which you don’t own the source code. By implementing an extension method in your code, you can call the method as if it is a native method for the type.

There are differing opinions on when to use extension methods as opposed to other coding styles. Microsoft’s recommended general guidelines state: “We recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type… For a class library that you implemented, you shouldn’t use extension methods to avoid incrementing the version number of an assembly. If you want to add significant functionality to a library for which you own the source code, you should follow the standard .NET Framework guidelines for assembly versioning.”

Inheritance, as recommended by Microsoft, is the preferred method for extending and building upon base type functionality. Let's say you have a base type named Animal. Animal has two methods: Walk() and Sleep(). Now, you have a dog, which is an animal, and you want that dog to bark. You don't want that method on Animal, as not all animals bark. Instead, you inherit from Animal and create a new method called Bark(). What if you want your animal to eat? All animals eat, so inheriting doesn't fit; and if you cannot change the base type, you can instead add an Eat() extension method to Animal. By doing this, your derived type of Dog, and any other derived type, can use the newly created Eat extension method. Note that some classes are marked as sealed (NotInheritable in VB), meaning you cannot inherit from them to create a new class, so extension methods can be used to extend the existing sealed functionality.

Extension methods cannot be used to override an existing method that has the exact same signature. For example, every type has a ToString() method. If you create an extension method of ToString(), the extension method will not be called, but the base type’s method will be called instead. On the other hand, if you create an extension method called ToString(string), your extension method will be called through the overloaded method, provided there isn’t another overloaded method with the same signature.
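A small self-contained example makes this binding behavior visible (Widget and WidgetExtensions are hypothetical names used only for illustration):

```csharp
using System;

public class Widget
{
    public override string ToString() => "instance";
}

public static class WidgetExtensions
{
    // Same signature as the instance method: the compiler always binds to
    // the instance method, so this extension is never called.
    public static string ToString(this Widget widget) => "extension";

    // Different signature: resolved as an extension overload.
    public static string ToString(this Widget widget, string prefix) => prefix + widget.ToString();
}

public class Demo
{
    public static void Main()
    {
        var widget = new Widget();
        Console.WriteLine(widget.ToString());        // prints "instance", not "extension"
        Console.WriteLine(widget.ToString("tag: ")); // prints "tag: instance"
    }
}
```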

Another thing to be aware of when implementing extension methods on existing types is the possibility that other extensions can be written against the same types and have the same signature. This will result in a compile error stating that the call is ambiguous. This issue was encountered by the developers of NDepend and written about in their blog post “A problem with extension methods.” Similarly, if the base type is later updated to include a method of the same signature, your extension method will no longer be called. Using the Animal example earlier, if the base type was updated to include the missing Eat() method, then the custom extension method of Eat() will no longer be called, and errors may occur.

All types in the .NET Framework inherit from System.Object. For this reason, it is strongly advised not to add extension methods to System.Object. Adding an extension to System.Object could have inconsistent behavior and increase the possibility of errors when an existing type method is called instead of the extension method. For the same reason, it is strongly advised not to overload an existing method with a different signature, such as the ToString(string) example described previously.

Extension methods are implemented in static classes in C# or in Modules in VB. Inside the class in C#, you create a static method whose first parameter is the type being extended, prefixed with the this modifier. Inside the Module in VB, you add the <Extension()> attribute to the Sub and include the type as the first parameter. The Extension() attribute requires that the VB file Imports System.Runtime.CompilerServices. One additional difference between C# and VB is that VB allows you to pass your type ByRef or ByVal, where C# passes by value only. Despite this, it is not recommended to pass types ByRef, to reduce code complexity and the chance for errors.

The following example, from Microsoft Docs C# Programming Guide, implements an extension method that will count the number of words contained in a String.

using System.Linq;
using System.Text;
using System;

namespace CustomExtensions
{
    // Extension methods must be defined in a static class.
    public static class StringExtension
    {
        // This is the extension method.
        // The first parameter takes the "this" modifier
        // and specifies the type for which the method is defined.
        public static int WordCount(this String str)
        {
            return str.Split(new char[] {' ', '.','?'}, StringSplitOptions.RemoveEmptyEntries).Length;
        }
    }
}

namespace Extension_Methods_Simple
{
    // Import the extension method namespace.
    using CustomExtensions;
    class Program
    {
        static void Main(string[] args)
        {
            string s = "The quick brown fox jumped over the lazy dog.";
            // Call the method as if it were an 
            // instance method on the type. Note that the first
            // parameter is not specified by the calling code.
            int i = s.WordCount();
            System.Console.WriteLine("Word count of s is {0}", i);
        }
    }
}

The example below, from Microsoft Docs VB Programming Guide, implements an extension that will print the output to a console line and append the appropriate punctuation.

' Declarations will typically be in a separate module.  
Imports System.Runtime.CompilerServices  
  
Module StringExtensions  
    <Extension()>   
    Public Sub PrintAndPunctuate(ByVal aString As String,   
                                 ByVal punc As String)  
        Console.WriteLine(aString & punc)  
    End Sub  
  
End Module  

' Import the module that holds the extension method you want to use,   
' and call it.  
  
Imports ConsoleApplication2.StringExtensions  
  
Module Module1  
  
    Sub Main()  
        Dim example = "Hello"  
        example.PrintAndPunctuate("?")  
        example.PrintAndPunctuate("!!!!")  
    End Sub  
  
End Module

Overview of Microsoft Relationship Sales

This blog post continues with the study related material for MB-900 and focuses on Microsoft Relationship Sales. Microsoft Relationship Sales is an additional module in Dynamics 365 which links LinkedIn Sales Navigator with Dynamics 365 for Sales by leveraging LinkedIn relationships and company information. This integration between LinkedIn and Dynamics 365 will allow sales associates access to powerful relationship information from within Dynamics 365 for Sales.

Unifying the Seller Experience from Microsoft Learn

LinkedIn Sales Navigator integration allows users to see headlines, activities, news and other related information about their contact directly inside Dynamics 365. Users can see shared contacts in order to facilitate icebreakers and encourage introductions to new prospects. LinkedIn InMail and PointDrive can be shown as activities and the LinkedIn user images can be displayed directly inside Dynamics 365. All this information can then be used in Dynamics 365 sales playbooks and with the AI for sales to strengthen the relationships between the sales associate and their contacts.

Embedded Intelligence is included as part of Dynamics 365 for Sales and adds powerful features for helping to manage relationships with contacts, especially in conjunction with Microsoft Relationship Sales. This is a suite of features that works with data stored in Dynamics 365 and Microsoft Exchange. The features include the relationship assistant, email engagement, and auto capture. The relationship assistant analyzes data to generate cards detailing recommended actions to take based on the current context of the user. Email engagement helps users keep track of when emails are sent, viewed, and responded to, as well as when attachments and links are viewed, recorded as activities. Auto capture analyzes relevant email addresses in Microsoft Exchange and tracks them from within Dynamics 365.

Embedded Intelligence Diagram from Microsoft Learn

All the data collected from LinkedIn, Exchange, and Dynamics 365 can be uniquely analyzed with Dynamics 365 AI for Sales. The sales specific AI can analyze relationship information, predict lead and opportunity scores, analyze notes, and even offer talking points to customers and prospects.

Relationship Sales Video from Microsoft Learn

Overview of Dynamics 365 for Sales

Continuing with the study for MB-900 is an overview of Dynamics 365 for Sales. Dynamics 365 for Sales is built on top of Dynamics 365 Customer Engagement (CE) but is also available as an individual module at a lower cost per user. As the name implies, it is a sales-focused application designed to help sales associates drive business and sales managers dive deeper into the sales data generated.

Example Sales Process from Microsoft Learn

The image above, provided by Microsoft, shows an example of the typical sales process. The sales associate will start by qualifying leads. They contact the lead, gather information, and determine if the lead is a good fit, both for the potential customer and the organization as a whole. All actions, such as emails and phone calls, can be tracked against the lead. This information then becomes available to other sales associates and sales managers along the way. This process can be further enhanced by AI for Sales to create custom scoring models and analyze existing organization relationships to improve the likelihood of qualifying leads.

Once a lead is qualified, it can be converted to an opportunity. The lead can be linked to an existing contact and/or account; if one does not exist, it will be created and linked automatically during the qualification process. This begins the development cycle, where the opportunity is worked by determining what products and/or services are of interest to the customer. The data collected during this process can be tailored to meet the organization's unique business process flow. That data is then available to other team members on the opportunity record, increasing the chances of closing the opportunity. As with the lead, AI for Sales can be utilized to help increase the chances of closing the deal.

Once all the data is gathered and the products/services selected, a quote can be generated directly from the opportunity. This quote is sent as a proposal to the customer. The customer can then work with the sales associate to make any changes. Quotes can be revised and sent to the customer with the updated information. This feedback loop can happen several times through the life of the quote until the customer accepts it.

An accepted quote can automatically generate the sales order and the associated opportunity is closed as won. Integrations into other ERP systems can keep track of current inventory levels required to fulfill the sales order. Once the order is fulfilled, an invoice is generated in order to bill the customer. The invoice generation can come from Dynamics 365 for Sales, but it is not uncommon for it to come from the backend ERP system, such as Dynamics 365 Finance and Operations.

Video overview of Dynamics 365 for Sales from Microsoft Learn

While this process is similar across many organizations, it is by no means universal. Every organization has its own data points that are needed and its own strategies designed to qualify leads and close sales. Dynamics 365 for Sales allows for the creation of new fields and even new entities for gathering information. These new data points can be used in reports, shown on dashboards, and included anywhere in the sales process. With this data, whether from existing entities or custom entities, industry and business requirements can be fulfilled and the organization's ability to drive business can be improved.

Introduction to Dynamics 365

As MB-900 is a test related to the fundamentals of Dynamics 365, it is essential to start with a basic introduction. Dynamics 365 is a collection of first-party business applications that are designed to help businesses simplify the complexity of CRM and ERP systems. They are modular business applications which work together on a single platform. Customer Engagement, Unified Operations, and Power Platform are the main areas of business applications for Dynamics 365. The individual business solutions are listed below:

  • Artificial Intelligence
  • Business Central
  • Customer Service
  • Field Service
  • Finance and Operations
  • Marketing
  • Mixed Reality
  • Project Service Automation
  • Retail
  • Sales
  • Talent

By leveraging one or more of these applications, organizations are able to engage customers more effectively, optimize business operations, empower employees to work better and more efficiently, as well as transform products and services by using data to find new opportunities and analyze customer data.

Understanding Cloud Concepts

One of the skills measured in “Exam MB-900: Microsoft Certified: Dynamics 365 Fundamentals” is understanding cloud concepts. It is important to understand cloud architecture as well as comparing cloud services and cloud offerings. Microsoft’s cloud offering is Azure, so it is essential to know how Azure fits. This blog post will share an overview of cloud computing and attempt to cover the topics that may relate to the exam.

Cloud Computing Overview

Microsoft provides the most straightforward overview of cloud computing.

Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, intelligence and more—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.
What is cloud computing? A beginner’s guide.

There are many different activities related to cloud computing. An individual or company could be hosting a web page, using email, creating documents online, running applications over the internet, or even connecting to a virtual machine online. Cloud computing makes what we do on the internet every day possible. For businesses, it can also be a way to have maximum performance and scalability at a fraction of the cost of managing the infrastructure in-house.

Cloud Service Provider

Cloud service providers are those companies that offer various cloud services. They maintain the hardware infrastructure and the technical resources to support it. Payment for services is typically calculated based on usage. That allows the flexibility to expand or shrink usage of the cloud services to optimize expenses and maintain the reliability of services during peak times. Examples of cloud service providers are Amazon, Google, and Microsoft.

Virtual Machines

A virtual machine (VM) is an emulation of a computer running on another server. It has an operating system and virtualized hardware capable of managing multiple tasks. It is the most flexible of cloud offerings, as it gives complete control of the environment and applications running in that environment. The cloud service provider still maintains the hardware, networking, and security of the underlying infrastructure, while the user or company IT department manages the VM. The benefit of a virtual machine over a physical server is that a VM can be created quickly at a fraction of the cost required to set up a new computer.

Containers

A container is an isolated execution environment for applications. Unlike a virtual machine, a container does not bundle its own operating system; it shares the host's kernel and holds only the application and its dependencies in an isolated process. Containers are smaller than VMs and start much more quickly, as there is no operating system to initialize. Because a container carries everything the application needs to execute, it can be moved between environments seamlessly.

Serverless Computing

Serverless computing differs from a VM or container in that application code is executed without the need to create or maintain a server. Instead, an event triggers a related application function. Automated tasks, such as sending automatic email responses, are an ideal use of serverless computing. This model is often less costly than a VM or container, as the only charge relates to processing time. Because each function is an isolated process, this is also the quickest way to deploy to the cloud.
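As a rough, provider-neutral sketch (the event shape and handler name below are hypothetical, not any specific platform's API), a serverless function is just a handler that the platform invokes in response to an event:

```python
# Hypothetical event-triggered function: the platform, not your code, decides
# when handler() runs, and you are billed only for its execution time.

def handler(event):
    """Build an automatic reply to an incoming support-email event."""
    reply = {
        "to": event["from"],
        "subject": "Re: " + event["subject"],
        "body": "Thanks for your message - we will get back to you shortly.",
    }
    # A real function would hand this off to an email-sending service here.
    return reply

print(handler({"from": "user@example.com", "subject": "Help"}))
```

On a real platform the same logic would be deployed as a function resource and wired to a trigger (an HTTP request, a queue message, an incoming email), with no server for you to provision or patch.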

Public Cloud

With the public cloud, there is no local hardware. Everything in the public cloud is managed and paid for through the cloud service provider, and the service is typically charged on a pay-for-usage basis. It is the most common type of cloud service; Amazon Web Services, Google Cloud, and Microsoft Azure are all examples of public clouds. The advantages of this model are pricing and service flexibility without the need to maintain infrastructure. It has disadvantages as well: a public offering may not meet an organization's compliance and security requirements, and legacy applications may not be able to function in the public cloud.

Private Cloud

A private cloud replicates the functionality of a public cloud, except that the infrastructure is typically not managed by a cloud service provider. Companies implementing a private cloud must purchase the hardware and hire the technical professionals to maintain it. Because it is an isolated cloud instance, it offers the security and compliance guarantees that may not be possible in a public cloud. The most significant disadvantages of the private cloud are cost and flexibility: all the hardware is purchased up front, and if computing resources cannot keep up with demand, additional hardware must be purchased.

Hybrid Cloud

The hybrid cloud is a combination of the public and private cloud. It is used in scenarios where a private cloud is necessary for security, compliance, legal, or legacy applications, while other, less sensitive applications run with the scalability and flexibility of the public cloud. It is also commonly used as a transitional model while companies migrate from legacy on-premise infrastructure to the public cloud. This scenario, while sounding like the best of both worlds, can be more complicated and more expensive to operate and maintain.

Government Cloud

Government agencies have specific security and compliance needs, and many government regulations govern the handling of sensitive data. Typical public cloud offerings cannot meet these requirements, so the government cloud provides separate, isolated data centers and network infrastructure designed to satisfy the stringent security and compliance policies that government agencies require.

On-Premise vs. Cloud

On-premise infrastructure differs from both the private and public cloud. With on-premise, all the hardware is physically located where the company does business, and it may or may not be accessible from the internet, depending on security needs and requirements. All hardware must be purchased upfront, and technical staff must be hired to maintain it.

A private cloud takes the on-premise solution and moves it to the internet: the hardware is no longer located locally but instead hosted in one or more data centers. As with on-premise infrastructure, the hardware and the technical resources to manage it are the responsibility of the company, while the underlying network is managed by the data center. A hybrid cloud takes the next step, pushing more of the infrastructure into the public cloud to be managed by a cloud service provider, while still communicating with the company's private cloud. Finally, the public cloud moves all infrastructure management to the cloud service provider, leaving the company to focus on its specific business requirements and pay only for what is required and used.

Infrastructure as a Service

Infrastructure as a Service (IaaS) is the lowest level of the cloud computing stack. It provides complete control over the virtualized hardware and operating systems. Instead of buying the necessary hardware upfront, you rent VMs and pay based on usage and virtual hardware requirements, while the cloud service provider manages the underlying infrastructure. IaaS gives instant computing power over the internet that can be powered on and turned off quickly and easily.

Platform as a Service

Platform as a Service (PaaS) is the middle level of the cloud computing stack. It provides an environment for building, testing and deploying software applications. This type of service offers the necessary tools and resources for development and testing, without the need to worry about the underlying hardware, operating system, and security considerations of the platform. The focus for the user can remain on the application.

Software as a Service

Software as a Service (SaaS) is the top level of the cloud computing stack. It offers subscription-based access to online applications. Examples include Gmail and Outlook for email and Office 365 for productivity software; Dynamics 365 is another example of a SaaS offering.

Cost Option Comparison

When comparing cost options, it is essential to weigh capital expenditure (CapEx) against operational expenditure (OpEx). CapEx covers the upfront costs of doing business, such as purchasing the hardware infrastructure required to operate. OpEx has no upfront cost and is paid over time; it includes expenses such as employee salaries and subscription services.

On-premise, private cloud, and hybrid cloud all carry significant CapEx costs: server hardware, storage, networking components, and backup and archiving fees. There are also OpEx costs for the technical employees required to manage the infrastructure, as well as recurring costs for network bandwidth, software updates, and so on.

The public cloud doesn’t have the CapEx costs seen in on-premise or private cloud environments. Instead, you have the OpEx costs of renting services or paying subscription licensing fees. Because the cloud provider handles infrastructure maintenance, fewer technical employees are needed, reducing the overall cost to the company.
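To make the trade-off concrete, here is a toy calculation; every figure below is invented for illustration, not a real price:

```python
def cumulative_cost(capex, monthly_opex, months):
    """Total spend after a number of months: one-time CapEx plus recurring OpEx."""
    return capex + monthly_opex * months

# Hypothetical numbers: on-premise pays a large upfront CapEx but lower monthly
# OpEx; public cloud pays no CapEx but a higher monthly subscription.
on_prem = cumulative_cost(capex=100_000, monthly_opex=2_000, months=36)
cloud = cumulative_cost(capex=0, monthly_opex=4_000, months=36)

print(on_prem, cloud)  # 172000 144000
```

Under these made-up numbers the public cloud is cheaper over three years, but the break-even point shifts with the real figures; the structural point is that CapEx is spent whether or not capacity is used, while OpEx scales with usage.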

Microsoft Azure

Azure is the cloud computing offering from Microsoft. It comprises over 100 different cloud services that help businesses build and deploy solutions to the cloud, ranging from application hosting to virtual machines running a variety of custom services and offerings. It is a global service, with data centers in over 50 regions, and it provides many security and compliance offerings to ensure that data is protected.

Microsoft Dynamics 365 Customer Engagement Certification

I started working with Microsoft Dynamics CRM 4.0 ten years ago. Five years ago I received my first Microsoft certifications for Dynamics CRM 2013. Times and technologies continue to grow, and with them, the offerings and skills needed to customize and develop on the platform correctly. What I knew as Microsoft Dynamics CRM is now known as Microsoft Dynamics 365 Customer Engagement (CE) and is part of the broader Dynamics 365 suite of applications. To stay relevant with Microsoft’s offerings, I have to move beyond what I learned over the past ten years and begin to look into Azure, the Power Platform, and the other products contained under the Dynamics 365 umbrella.

For this reason, I am studying to gain new certifications for Dynamics 365 CE and looking across the spectrum. The current exams I am studying for are MB-900, MB-200, MB-210, MB-220, MB-230, and MB-240. All these exams are geared towards functional consultants but are undoubtedly beneficial to software architects as well. I will be refreshing my knowledge of the entire suite of products, touching on cloud computing, and looking at the Power Platform. As Microsoft introduces more exams geared toward developers, I will turn my focus to those as well. With this blog refresh, I plan on sharing my studies here. I even plan to create quizzes to test my knowledge and to allow readers to test their own as well. I hope others will benefit from this information and find ways to pass it on to others in their sphere of influence.