Saturday, 22 April 2017

Salesforce Health Check Custom Baseline


Introduction

The Salesforce Health Check has been around for a year or so now, debuting in the Spring 16 release of Salesforce (and bearing a striking resemblance to an AppExchange listing of the same name). The Salesforce Help topic gives chapter and verse on this, so I’m not going to spend any time on the basic functionality, except to say that it’s a great tool for seeing at a glance how your Salesforce org shapes up security-wise. There has been one caveat though: the baseline it is compared against is set by Salesforce, not you, which means that if your security standard differs from the one true path you’ll see warnings and errors. As anyone who has accepted a unit test failure for more than one build knows, as soon as people expect errors they stop counting how many there are. You may start out accepting a single warning, but before you know it you have a number of potential security problems being ignored because “that page always shows errors”.

Custom Baselines

Spring 17 introduced the beta of custom baselines - this allows you to deviate from the Salesforce standard and supply your own baseline which reflects your security requirements. From now on if your Health Check page shows an error or exception, that means you have a real security issue and need to deal with it quickly.

While you could create a custom baseline from scratch, the easiest way is to export the standard baseline and amend it. Navigate to Setup -> Security Controls -> Health Check and click the gear icon, then ‘Export XML’ from the resulting context menu:

 

[Screenshot: Health Check gear icon context menu with the ‘Export XML’ option]

 

This downloads the baseline to a file named ‘baseline.xml’ (or ‘baseline (1).xml’, ‘baseline (2).xml’, etc. if you keep downloading it to the same place on a Mac!), which you can then open in your favourite editor - I like Atom for XML files. Again, the Salesforce Help does a great job of explaining the format of the XML file so I’m not going to cover it here. A couple of things to bear in mind:

  • You must change the Name and DeveloperName of the Baseline element, otherwise you’ll be trying to overwrite the standard, which you can’t do.
  • When you import the file, do it via the Lightning Experience. If you try it in Salesforce Classic and something goes wrong, you get no indication that an error has occurred. According to the help, “If your import fails, you receive a detailed message in Lightning Experience to help you resolve the problem”, which is pretty big talk when the actual message is: [Screenshot: terse import failure message]

Changing the Baseline

One area where my dev org is considered substandard is the password expiration time. I have my passwords set up never to expire, as forcing users to change their passwords regularly often results in them choosing predictable passwords that are easier to break. The Salesforce health check standard generates a Medium Risk alert if the value is over 90 days and a High Risk alert if the value is over 180 days.

[Screenshot: Health Check Medium Risk alert for password expiration]

Here’s the section of the file that configures this:

[Screenshot: the password expiration section of baseline.xml]
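In text form the entry looks something like the following sketch - the element names here are illustrative rather than the exact schema (the Salesforce Help has the definitive format), but the values match the alerts above:

<Baseline Name="My Custom Baseline" DeveloperName="My_Custom_Baseline">
  <!-- Illustrative only: Medium Risk when the org value exceeds the
       standard value, High Risk when it exceeds the warning value -->
  <SettingRiskMapping>
    <SettingName>PasswordExpiration</SettingName>
    <StandardValue>90.0</StandardValue>
    <WarningValue>180.0</WarningValue>
  </SettingRiskMapping>
</Baseline>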

If I change the standard value to the numeric equivalent of Never Expires, 2147483647.0, and the warning to one higher:

[Screenshot: amended baseline XML with the new values]

and import the updated XML file using the context menu shown above, I can then switch my Health Check to the custom baseline and my password expiration is now at a satisfactory level:

[Screenshot: Health Check against the custom baseline - password expiration now compliant]

I am not a security consultant

Notwithstanding the fact that forcing users to change their passwords regularly is out of favour in some places, you should not take this post as my advising you about your password policies in any shape or form. If you base your security settings on things that you read in random blog posts then best of luck to you - I did it in a dev org to show the functionality as there’s nothing that I really care about in there.

I’d expect the majority of custom baselines to be making the security standard more restrictive, in regulated industries for example, but what you should set up is a baseline that aligns with your corporate security policies.

Here comes the wish list

Anyone familiar with my blogs or Medium stories knows that I usually have a wish list around Salesforce functionality, so if any product managers are reading this, here’s what I’d like to see:

  • A way to email out the health check, run against a custom baseline, on a schedule. Security and compliance departments can receive this first thing in the morning and spend the day focusing on other systems.
  • Notifications when the health check result changes - if my Evil Co-Worker blags admin rights and changes the configuration to allow previous passwords to be re-used, I want to know about it. (Ideally I’d receive an automated report at the end of every day detailing everything the Evil Co-Worker has done, but that might be asking too much.)
  • A way to snapshot the health check output regularly, so that I can see whether an org is trending towards or away from compliance with the baseline.
  • Custom entries - for example, I can easily spin through the ApexClass sobjects and figure out how many aren’t using ‘with sharing’ (see the sketch below). Security isn’t just about configuration, it’s also about code!
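By way of illustration, here’s a rough sketch of that last item - a naive string match over the class bodies that would need refining to cope with comments, test classes and the like:

Integer unsharedCount = 0;
for (ApexClass cls : [select Name, Body from ApexClass])
{
    // the body isn't visible for managed package classes, so guard against null
    if ( (null != cls.Body) && (!cls.Body.contains('with sharing')) )
    {
        System.debug('No sharing declaration : ' + cls.Name);
        unsharedCount++;
    }
}
System.debug(unsharedCount + ' classes do not use with sharing');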


Saturday, 15 April 2017

Lightning Design System in Visualforce Part 3 - Built In SLDS



Overview

In the past, using the Salesforce Lightning Design System (LDS) in Visualforce (or Lightning Components for that matter) required downloading the latest version from the home page and uploading it as a static resource to each Salesforce org that you wanted to use it on. I dread to think how many copies of exactly the same zip file have been uploaded over the last 18 months or so, but I’d imagine a significant amount of storage is currently dedicated to just this purpose. Probably only beaten out by a million copies of jQuery and Bootstrap. In the Spring 17 release of Salesforce, this is no longer the case - a single Visualforce tag can now do the heavy lifting for you.

The SLDS Tag

Simply add <apex:slds /> to your page and nest your markup in a div styled with the slds-scope class, and you are good to go. For example, the following page:

<apex:page showHeader="false" sidebar="false" standardStylesheets="false"
           standardController="Account" applyHtmlTag="false">
    <html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
        <body>
            <apex:slds />
            <div class="slds-scope">
                <div class="slds-page-header" role="banner">
                    <div class="slds-grid">
                        <div class="slds-col slds-has-flexi-truncate">
                            <div class="slds-media slds-no-space slds-grow">
                                <div class="slds-media__figure">
                                    <svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
                                        <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
                                    </svg>
                                </div>
                                <div class="slds-media__body">
                                    <p class="slds-text-title--caps slds-line-height--reset">Account</p>
                                    <h1 class="slds-page-header__title slds-m-right--small slds-align-middle slds-truncate"
                                        title="{!Account.Name}">{!Account.Name}</h1>
                                </div>
                            </div>
                        </div>
                    </div>
                    <ul class="slds-grid slds-page-header__detail-row">
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Description">Description</p>
                            <p class="slds-text-body--regular slds-truncate" title="{!Account.Description}">{!Account.Description}</p>
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Industry">Industry</p>
                            {!Account.Industry}
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Visualforce">Visualforce</p>
                            No static resources were used!
                        </li>
                    </ul>
                </div>
            </div>
        </body>
    </html>
</apex:page>

renders as:

[Screenshot: the rendered account page header]

which is pretty cool, and makes throwing a page together to test out some ideas in a new org a lot easier than it has been.

What about Images?

Without the LDS static resource, image references need to be handled a slightly different way, via the $Asset global. Use this wherever you’d use your static resource previously. E.g. in the example markup above, I use the $Asset global as follows:

<svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
   <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
</svg>

although continuing the pattern of making sure SVG is difficult to use, you have to add a custom namespace to the page:

<html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">

and you can’t do that unless you turn off the standard Salesforce header, sidebar and stylesheets. If you see an SVG on a Salesforce page in the wild, take a moment to appreciate the hoops that the developer jumped through in order to get it there.

So no more static resources?

Well, that depends. The SLDS tag always pulls in the latest version of the Lightning Design System, so much depends on whether you want that behaviour. It means that things may change underneath you, possibly in a breaking way. If it’s for your internal Salesforce org and you have people who will be able to make any changes required by the latest version, then emphatically yes. If you are building pages for a consulting customer who expects them to continue working in the future with zero effort, then maybe not. As always, there is no substitute for thinking about how the application will be used, both now and in the future.


Saturday, 11 March 2017

One Trigger to Rule Them All? It Depends.



Introduction

Anyone involved in Salesforce development will be familiar with triggers and the religious wars around how they should be architected. My view is that, like many things in life, there is no right answer - it all depends on the context. The choice pretty much boils down to one of the following two options:

One Trigger per Object

This approach mandates a single trigger file that handles all possible actions, declared along the lines of:

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete) {

  // trigger body
}

One Trigger per Object and Action

This approach takes the view that each trigger should handle a single action on an object:

trigger AccountTrigger_bu on Account (before update) {

  // trigger body
}
...
trigger AccountTrigger_au on Account (after update) {

  // trigger body
}

I’ve read many blog posts stating flat out that the first way is best practice. No nuances, it’s just the right way and should be used everywhere, always. Typically the reason given is that this is how the author does it, therefore everyone should. I strongly disagree with this view. Not that one trigger per object shouldn’t be used, but that it shouldn’t be used without applying some thought.

Note: one trigger per object and action is the maximum granularity - never have two triggers for the same action on the same object, as then you have no control over the order of execution and this will inevitably bite you. Plus you’ve spread business logic across multiple locations and made life harder for everyone.

Consulting versus Product Development

The reason I have this view is that I work across consultancy and product development at BrightGen. Most of the work I do nowadays is related to our business accelerators, such as BrightMEDIA, but I still have Technical Architect responsibility across a number of consulting engagements, which are often implementations for companies that don’t have a lot of internal Salesforce expertise, and what they have isn’t the developer skill set.

One message I’m always repeating to our consultants is to have some empathy with our customers and think about those that come after us. Thus we use clicks not code and try to take the simplest approach that will work, so that we don’t leave the customer with a system that requires them to come back to us every time they need to change anything. Obviously we like to take our customers into service management after a consultancy engagement, but we want them to choose to do that based on the good job that we have done, rather than because we have locked them in by making things complex.

Sample Scenario

So here’s a hypothetical example - as part of a solution I need to take some action when a User is updated, but not for any other trigger events. At some point later, a customer administrator triaging a potential issue wants to know if there is any automated processing that takes place after a user is inserted.

If I’ve gone with the one trigger per object and action combination, they can simply go to the setup page for the object in question and look at the triggers. The naming convention makes it clear that the only trigger in place is to handle the case when a user is updated, so they can stop this particular line of enquiry (assuming my Evil Co-Worker hasn’t chosen an inaccurate name just to cause trouble).

[Screenshot: trigger list for the User object in Setup]

If I’ve gone with one trigger per object, the administrator is none the wiser. There is a single trigger, but nothing to indicate what it does. The administrator then has to look into the trigger code to figure out if there is any after insert processing. What they will then find is one of two things:

  • A load of wavy if statements checking the type of action - before vs after, insert vs update etc - and then calling out to external code. Most developers try to make sure that an external method is called only once, so you often end up with a wall of if statements for the administrator to enjoy - see the sketch after this list.
  • Delegation to a trigger handler, leaving the admin to look at another source file to try to figure out what is happening.
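For illustration, here’s a minimal sketch of what the administrator might find - the wall of if statements from the first option, delegating to a hypothetical handler class as in the second:

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete) {

  if (Trigger.isBefore) {
    if (Trigger.isInsert) {
      AccountTriggerHandler.beforeInsert(Trigger.new);
    }
    else if (Trigger.isUpdate) {
      AccountTriggerHandler.beforeUpdate(Trigger.old, Trigger.new);
    }
    else if (Trigger.isDelete) {
      AccountTriggerHandler.beforeDelete(Trigger.old);
    }
  }
  else {
    if (Trigger.isInsert) {
      AccountTriggerHandler.afterInsert(Trigger.new);
    }
    else if (Trigger.isUpdate) {
      AccountTriggerHandler.afterUpdate(Trigger.old, Trigger.new);
    }
    // ... and so on for the remaining after actions
  }
}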

Now I don’t know about you, but if administrators are having to look at my source code, and even worse trying to understand Apex code to figure out something as basic as this, I’d feel like I’d done a pretty poor job.

Enter the Salesforce Optimizer

The Spring 17 release introduced the Salesforce Optimizer - an external tool that analyses your implementation and sends you the results. Here’s what it has to say about my triggers:

[Screenshot: Salesforce Optimizer warning recommending one trigger per object]

And there’s the dogma writ large again - a big red warning alert saying I should have one trigger per object, just because. Don’t get me wrong - I think the Salesforce Optimizer is a great idea with the potential to be a real time saver, and the intention is clearly to help, but it’s a really blunt instrument that presents opinion as fact.

The chances are at some point my customers will run this and ask me why I’ve gone against the recommended approach, even though in their case it is absolutely the appropriate approach. I find I have no problem explaining this to customers, but I do have to take the time to do that. Thanks for throwing me under the bus Salesforce!

In Conclusion

What you shouldn’t take away from the above is that one trigger per object is the wrong approach - in many situations it’s absolutely the right approach, and it’s the one I use in some of my product development. In other situations it isn’t the right approach and I don’t use it. What you should take away is that it’s important to think about how to use triggers for every project you undertake - going in with a dogmatic view that there is one true way to do things, and that everything will be brute-forced into it, may make you feel like a l33t developer but is unlikely to be helpful in the long term. It may also mark you out as a Rogue High Performer, and you really don’t want that.

 

Sunday, 5 March 2017

Salesforce DX Week 1


(NOTE: This post is based on the SalesforceDX pilot which, like all pilots, may never make it to GA. I bet it does though!)



Introduction

The SalesforceDX pilot started a week or two ago and BrightGen were lucky enough to be selected to participate (thanks to the sterling efforts of my colleague Kieran Maguire who didn’t screw up his signup, unlike me!). This week I’ve managed to spend a reasonable amount of time reading the docs and trying out the basics and it’s clear already that this is going to be a game changer. 

There will be bugs!

This isn’t the first pilot that I’ve been involved in, but it’s by far the largest in terms of new functionality - a new version of the IDE, a new CLI with a ton of commands, and a new type of org. A pilot is a two-way street - you get to play with the new feature long before it becomes GA (if it ever does!), but the flip side is that it won’t have been tested to destruction like a GA feature. With the best will in the world there’s no way that Wade Wagner and co could test out every possible scenario, so some stuff will break, and that’s okay. When things break (or work in a non-intuitive way) you sometimes get a chance to influence how the fix works, which is pretty cool. Be a grown up though - report potential issues in a measured way with as much detail as you can gather - it’s always embarrassing to have to climb down from a high horse when you realise that you made the mistake, not the tool!

Scratch Orgs

Scratch orgs are probably the feature I’ve been most excited about in SalesforceDX. I run the BrightMEDIA team at BrightGen and setting up a developer org for a new member of our team takes around half a day. After the initial setup, every release needs to be executed on each dev org as well as the target customer or demo org(s), which consumes a fair amount of time with a weekly release cadence. There’s also the problem of experimentation - often devs will try something out, realise it’s not the best way to do it, but not tear down everything they built. Over time the dev org picks up baggage which the dev has to be careful doesn’t make its way into version control.

Scratch orgs mitigate the first problem and solve the second. A scratch org is ephemeral - it is created quickly from configuration and should only last for the duration of the development task you are carrying out. When we set up a developer edition we have to contact Salesforce support to get the Apex character limit increased and multi-currency enabled. Scratch orgs already have an increased character limit, and features can be defined in the configuration. Here’s the scratch org configuration file for one of my projects:

{
  "Company": "KAB DEV",
  "Country": "GB",
  "LastName": "kbowden",
  "Email": "keir.bowden@googlemail.com",
  "Edition": "Developer",
  "Features": "Communities;MultiCurrency",
  "OrgPreferences" : {
    "ChatterEnabled": true,
    "S1DesktopEnabled" : true,
    "NetworksEnabled": true,
    "Translation" : true,
    "PathAssistantsEnabled" : true
  }
}

The Features attribute:

"Features": "Communities;MultiCurrency"

enables communities and multi-currency when my org is created, saving me a couple of hours raising a case and waiting for a response right off the bat.

Creating a Scratch Org

Is a single command utilising the new CLI:

> sfdx force:org:create --definitionfile config/workspace-scratch-def.json

and it’s fast. I’ve just created an org for the purposes of this blog and I’d be surprised if it took more than a minute, although the DNS propagation of the new org name can take a few more minutes. You don’t have to worry about passwords with scratch orgs - it’s all handled by the CLI. To “login” I just execute:

> sfdx force:org:open

and a browser window opens and I’m good to go. Accessing the Manage Currencies setup node shows that multi-currency has indeed been enabled.

[Screenshot: Manage Currencies setup page in the scratch org]

There’s a bit more to it than this in our case - a few packages have to be installed for example - but so far it looks like I can script all of this, which means a new developer just runs a single command to get an org they can start work in. Note that there’s just the standard developer edition data in here - I haven’t found time to play with the data export/import side of the CLI yet so that will have to wait for another day.
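As a sketch of where that script is heading, the basics only need the subcommands that appear in this post - package installs and data loading would be extra steps:

#!/bin/bash
# create the scratch org from the workspace configuration
sfdx force:org:create --definitionfile config/workspace-scratch-def.json
# push the local source into the new org
sfdx force:source:push
# open the org in a browser, ready to start work
sfdx force:org:open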

Managing Code

If you are familiar with the git paradigm of pulling and pushing changes from/to a remote location, the SalesforceDX source management is simple to pick up. You don’t get version control, but you do get automatic detection of what has changed and where. The docs state that this functionality is only available for scratch orgs and we still have to use the metadata API to push to sandbox/production orgs, which seems fair enough for a pilot to me.

Detecting Differences

In my scratch org I create a simple lightning component in the developer console:

<aura:component >
	<h1>I'm a simple Lightning Component</h1>
</aura:component>

In my current development process I have a script to extract the Lightning metadata and copy it into my source directory. With scratch orgs it’s a fair bit easier.

I can figure out what has changed by running the status subcommand:

> sfdx force:source:status

State       Full Name  Type                  Workspace Path
──────────  ─────────  ────────────────────  ──────────────
Remote Add  Simple     AuraDefinitionBundle

Pulling Code from the Scratch Org

I execute the pull subcommand to extract the new code from the org to my workspace on the local filesystem:

> sfdx force:source:pull


State    Full Name  Type                  Workspace Path
───────  ─────────  ────────────────────  ───────────────────────────────────────────────────────────
Changed  Simple     AuraDefinitionBundle  /Users/kbowden/SFDX/Blog/force-app/main/default/aura/Simple

I can then list the contents of my workspace and there is my new component:

> ls force-app/main/default/aura/

Simple

Pushing Code to the Scratch Org

If I edit the component locally, the status subcommand picks that up too:

> sfdx force:source:status

State          Full Name          Type                  Workspace Path
─────────────  ─────────────────  ────────────────────  ──────────────────────────────────────────────────────
Local Changed  Simple/Simple.cmp  AuraDefinitionBundle  force-app/main/default/aura/Simple/Simple.cmp-meta.xml
Local Changed  Simple/Simple.cmp  AuraDefinitionBundle  force-app/main/default/aura/Simple/Simple.cmp

and I can publish these changes to the scratch org via the push subcommand:

> sfdx force:source:push

State    Full Name  Type                  Workspace Path
───────  ─────────  ────────────────────  ──────────────────────────────────
Changed  Simple     AuraDefinitionBundle  force-app/main/default/aura/Simple
 

[Screenshot: the updated component rendering in the scratch org]

Scratch Orgs are Temporary

Unlike developer orgs, scratch orgs are not intended to persist. In fact, I’ve seen docs that state they may be deleted at any point in time. Although I’d imagine in reality this will be based on lack of use, it doesn’t matter: if your scratch org disappears, you can just spin up a new one with the same setup, push your local code and you are back where you were. This does mean you need to treat your local filesystem as the source of truth, but that’s pretty much how I work anyway.

This way scratch orgs don’t accumulate any baggage, and you don’t have to worry about destroying anything. If you don’t put it into version control, it won’t be there in the future.

One Org Shape to Rule them All

The configuration, data and code that make up your scratch org can be considered a template, especially if the setup is all scripted. This means that my team and I just need to update a single org shape “template” with changes that need to be applied to every development environment. Then we just spin up new scratch orgs and we can be sure that we are all in step with each other, which will save us time on many levels.


Sunday, 19 February 2017

Lightning Design System in Visualforce Part 2 - Forms


(Update 20/02/2017 - added the sample to the github repo - see the Any Code? section) 


Introduction

In Part 1 of this series I covered getting started with the Lightning Design System for Visualforce developers. The example in that post was a page with a thin veneer of Visualforce, but with content that was pretty much vanilla HTML. In this post I’ll be making much more use of standard Visualforce components, which means I have to make some compromises. What I’m looking for here is to marry the speed of Visualforce development (provided by the standard component library) with the modern styling of the Lightning Design System (LDS), rather than a pixel for pixel match with the Lightning Experience. Done is better than perfect!

Spring 17 and the LDS

<apex:slds>

In the original post the LDS was uploaded as a static resource, but Spring 17 means that this is no longer necessary - as long as you can live with the consequences.

A new Visualforce tag is available - <apex:slds>. This brings in the latest version of the LDS, hence my reference to consequences. If you can accept always being upgraded to the latest version as soon as it is available, this is the tag for you. If you need to fix the version (which I think I would, so that customer users don’t suddenly get presented with an unexpected change) then stick with the static resource. There are a few rules around using this tag, which are explained in the official docs.

<apex:page showHeader="false" sidebar="false" standardStylesheets="true"
           standardController="Contact" applyHTmlTag="false">
    <html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
        <head>
            <apex:slds />
        </head>
        <body class="slds-scope">
                 ...
        </body>
    </html>
</apex:page>

As I’m including the SLDS via the HTML header, I have to specify the slds-scope class for the body tag in order to be able to use the SLDS tags. Interestingly the docs state that if I’m showing the header, sidebar or using the standard stylesheets then I can’t add attributes to the html tag and thus SVG icons aren’t supported. However, I am using the standard stylesheets and they are still working for me, at least in Firefox, so go figure. If this doesn’t work for you, you’ll need to switch the icons to another format.

$Asset

If you don’t upload the LDS as a static resource, you’ll need to get the assets (icons etc) from the system default. Enter the $Asset global variable, another new feature in Spring 17. Simply use $Asset.SLDS in place of your static resource, and you can access assets via the URLFOR function. Again more details in the official docs.

<div class="slds-media__figure">
    <svg aria-hidden="true" class="slds-icon slds-icon-standard-contact">
        <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#contact')}"></use>
    </svg>
</div>

Styling Inputs

The key to applying the LDS to standard Visualforce form components is the styleClass attribute - this allows a custom style to override the standard Visualforce styling that we all know and love (!). 

Using a Visualforce standard component inside an SLDS styled form element doesn’t look too bad - just a little truncated. The following markup:

<apex:inputField value="{!Contact.FirstName}"/>

generates:

[Screenshot: truncated standard input]

Supplying the SLDS style class fixes this:

<apex:inputField styleClass="slds-input" value="{!Contact.FirstName}"/>

 

[Screenshot: input with the slds-input style applied]

Buttons

Buttons are another simple fix - I can still use command buttons, just styled for the LDS:

<div class="slds-p-horizontal--small slds-m-top--medium slds-size--1-of-1 slds-align--absolute-center">
    <apex:commandButton styleClass="slds-button slds-button--neutral" value="Cancel" action="{!cancel}" />
    <apex:commandButton styleClass="slds-button slds-button--brand" value="Save" action="{!save}" />
</div>

[Screenshot: LDS-styled Cancel and Save buttons]

One size does not fit all

While the style class works well for simple inputs, fields which require more complex widgets are where the compromises come in. Lookups, for example, are very different in the LDS and Visualforce. In this case I have to live with the fact that the search will produce a popup window and the input will have a magnifying glass, but I add some styling to make it less jarring on the user:

<apex:inputField style="width:97%; line-height:1.875em;" value="{!Contact.AccountId}" />

which renders as:

[Screenshot: LDS-styled lookup with the standard magnifying glass]

So not perfect but not terrible either.

Required Fields

Required fields mean a bigger compromise, as I have to add the required styling myself. My page markup therefore knows which fields are required and which aren’t, which in turn makes the page less flexible. If an administrator makes one of the fields required, basic Visualforce skills are required to change the page to reflect this:

<div class="slds-form-element slds-hint-parent">
    <span class="slds-form-element__label"><abbr class="slds-required" title="required">*</abbr>Last Name</span>
    <div class="slds-form-element__control">
        <apex:inputField styleClass="slds-input" value="{!Contact.LastName}"/>
    </div>
</div> 

[Screenshot: required Last Name field with LDS styling]

The end result

So here’s the final page - clearly not an exact match for LEX, but pretty close, and put together very quickly.

[Screenshot: the finished contact edit page]

Any code?

As usual with LDS posts, the code is in my LDS Samples Github Repository. There’s also an unmanaged package available to save wasting time copying and pasting - see the README.

In Conclusion

Pragmatism is key here - there are some compromises around styling and losing some of the separation of the page and business logic, but I feel these are outweighed by the sheer speed of development. Of course I could switch to using vanilla HTML with LDS styling and manage the inputs via JavaScript, but if I’m going that route I’ll go the whole hog and use Lightning Components.


Saturday, 14 January 2017

Salesforce Platform Cache - Expect the Unexpected



Introduction

When the Salesforce Platform Cache appeared, it allowed a lot of home-made caching code to be retired. Up until then, the only data that was cached was custom settings. (This is in terms of not requiring a trip to the database - there are obviously all sorts of caches at play in the Salesforce platform; reports involving large amounts of data often succeed on the second or third try as some of the data has made its way closer to the front end.) While custom settings speed up the retrieval of data and save SOQL calls, they were a pretty basic solution, requiring DML to push information into the ‘cache’, and the type of data that could be stored was pretty basic too, so caching and rehydrating a complex object meant you had to handle the transformation yourself.
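For example, ‘caching’ anything structured meant serializing it yourself - here’s a sketch using a hypothetical list custom setting named CacheEntry__c with a long text Value__c field:

// old style 'caching' - DML plus manual serialization into a text field
// (CacheEntry__c is a hypothetical list custom setting)
Opportunity opp=[select id, Name, CloseDate from Opportunity limit 1];
insert new CacheEntry__c(Name='opp', Value__c=JSON.serialize(opp));

// ... in a later transaction, rehydrate the object by hand
Opportunity cachedOpp=(Opportunity) JSON.deserialize(
    CacheEntry__c.getInstance('opp').Value__c, Opportunity.class);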

In this post I won’t be going into the details of getting started with the cache, as the Trailhead module does an excellent job of this already.

Seek, and you may not find it

A key feature of the platform cache is that just because you put something in the cache, it doesn’t mean it will still be there at a later date - you have to be prepared to deal with cache misses. Thus if I store an opportunity keyed by its id in a Blue Peter style partition I created earlier:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
Opportunity opp=[select id, Name, CloseDate from Opportunity where id='0060Y000003AWjG'];
oppsPart.put('0060Y000003AWjG', opp);

I can try to retrieve it in a different request:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
Opportunity oppFromCache=(Opportunity) oppsPart.get('0060Y000003AWjG');
System.debug('Opp = ' + oppFromCache);

and all things being equal I'll get the following debug output:

12:27:52:158 USER_DEBUG [3]|DEBUG|Opp = Opportunity:{Name=Edge Installation,
Id=0060Y000003AWjGQAW, CloseDate=2014-11-02 00:00:00}

If the opportunity has been booted from the cache for any reason (expired, an evil co-worker removed it) then I’ll receive a null rather than the opportunity, and can fetch it from the database.
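The null check and fallback look like this - the same keyed get as above, with the database as the backstop:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
Opportunity oppFromCache=(Opportunity) oppsPart.get('0060Y000003AWjG');
if (null == oppFromCache)
{
    // cache miss - fetch from the database and repopulate the cache
    oppFromCache=[select id, Name, CloseDate from Opportunity
                  where id='0060Y000003AWjG'];
    oppsPart.put('0060Y000003AWjG', oppFromCache);
}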

You have to know what you are asking for

Based on the above, if you are doing a lot of work with opportunities matching particular criteria, it’s tempting to try to use the cache as a database replacement, storing all of them in there keyed by id:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
List<Opportunity> opps=[select id, Name, CloseDate from Opportunity];
for (Opportunity opp : opps)
{
	oppsPart.put(opp.id, opp);
}

I can then retrieve these by iterating all of the keys in the partition:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
Set<String> keys=oppsPart.getKeys();
for (String key : keys)
{
	Opportunity oppFromCache=(Opportunity) oppsPart.get(key);
	System.debug('Opp = ' + oppFromCache);
}

which gives me the following output:

12:42:40:176 USER_DEBUG [6]|DEBUG|Opp = Opportunity:{Name=Edge SLA,
Id=0060Y000003AWjHQAW, CloseDate=2014-11-02 00:00:00}
12:42:40:180 USER_DEBUG [6]|DEBUG|Opp = Opportunity:{Name=United Oil
Installations, Id=0060Y000003AWjFQAW, CloseDate=2014-11-02 00:00:00}
...
12:42:40:261 USER_DEBUG [6]|DEBUG|Opp = Opportunity:{Name=Burlington Textiles
Weaving Plant Generator, Id=0060Y000003AWjOQAW, CloseDate=2014-11-02 00:00:00}

At first glance this looks fine, but there is a huge issue with this approach - I have no idea whether this is the full collection of opportunities that I cached. Entries might have been evicted (even while I was iterating the opportunities I’d queried, if the partition filled up), or an evil co-worker may have removed a couple and then added them back later, so that I have no idea why they weren’t processed.

Unlike executing a SOQL query, all I can be sure of is that I’ve retrieved all of the opportunities that remain in the cache, which may bear very little relation to the contents of the database. Clearly another approach is required.

Cache collections as a single entry

In this scenario, rather than storing each object as an entry the solution is to store the entire collection as a single entry:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
List<Opportunity> opps=[select id, Name, CloseDate from Opportunity];
oppsPart.put('all', opps);

retrieving it in a separate request as follows:

Cache.OrgPartition oppsPart=Cache.Org.getPartition('Opportunities');
List<Opportunity> oppsFromCache=(List<Opportunity>) oppsPart.get('all');
for (Opportunity opp : oppsFromCache)
{
	System.debug('Opp = ' + opp);
}

In this case, if my opportunities have been evicted from the cache I’ll receive a null response and can re-query them from the database. Of course this doesn’t stop an evil co-worker from overwriting the ‘all’ entry in the cache with a different collection of opportunities - if this continues to be a problem then a session cache partition, which is tied to a specific user, is probably a better option.
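The session cache API mirrors the org cache, so switching is straightforward - a sketch, assuming the partition has some session cache capacity allocated:

// session cache entries are only visible to the current user
Cache.SessionPartition oppsPart=Cache.Session.getPartition('Opportunities');
List<Opportunity> opps=[select id, Name, CloseDate from Opportunity];
oppsPart.put('all', opps);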


Friday, 30 December 2016

JavaScript Promises in Lightning Components



Introduction

Promises have been around for a few years now, originally in libraries or polyfills but now natively in JavaScript for most modern browsers (excluding IE11, as usual!).

The Mozilla Developer Network provides a succinct definition of JavaScript promises:

A Promise is a proxy for a value not necessarily known when the promise is created

Promises can be in one of three states:

  • pending - not fulfilled or rejected (Heisenberg, for Breaking Bad fans!)
  • fulfilled  - the asynchronous code successfully completed
  • rejected - an error occurred executing the asynchronous code

For me, the key advantage with Promises is that they allow asynchronous JavaScript code to be written in a way that looks somewhat like synchronous code, and is thus easier for someone new to the implementation to understand.

Creating a Promise

var promise = new Promise(function(resolve, reject) {

  // asynchronous code goes here

  if (success) {
     /* everything worked as expected */
     resolve("Excellent :)");
  }
  else {
    /* something went wrong */
    reject(Error("Bogus :("));
  }
});
 

The Promise constructor takes a callback function as a parameter. This callback function contains the asynchronous code to be executed (to retrieve a record from the Salesforce server, for example). The callback function in turn takes two parameters that are also functions - note that you don’t have to write these functions, just invoke them based on the outcome of the asynchronous code:

  • resolve - this function is invoked if the asynchronous code executes successfully. Executing this function moves the Promise to the fulfilled state
  • reject - this function is invoked with an Error if anything goes wrong in the asynchronous code. Executing this function moves the Promise to the rejected state.

Handling the Result

Thus far, all well and good, but pretty much all of the asynchronous code that I write is carrying out a remote activity and returning the result, so I need some way to be notified when the asynchronous code has completed. Enter the Promise.then() function:

promise.then(function(data) {
                alert('Success : ' + data);
             },
             function(error) {
                alert('Failure : ' + error.message);
             });

Promise.then() takes two functions as parameters - the first is a success callback, invoked if and when the promise is resolved, and the second is an error callback, invoked if and when the promise is rejected.

A Real World Example

When building Lightning Components, asynchronous interaction with the Salesforce server is typically carried out via an action. The following function takes an action and creates a Promise around it:

executeAction: function(cmp, action) {
    return new Promise(function(resolve, reject) {
        action.setCallback(this, function(response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                var retVal=response.getReturnValue();
                resolve(retVal);
            }
            else if (state === "ERROR") {
                // reject with as much detail as the response provides
                var errors = response.getError();
                if (errors && errors[0] && errors[0].message) {
                    reject(Error("Error message: " + errors[0].message));
                }
                else {
                    reject(Error("Unknown error"));
                }
            }
        });
        $A.enqueueAction(action);
    });
}

The executeAction function instantiates a Promise that defines the action callback handler and enqueues the action. When the action completes, the callback handler determines whether to fulfil or reject the Promise based on the state of the response.

This function can then be used to create a Promise to retrieve an account: 

var accAction = cmp.get("c.GetAccount");
var params={"accountIdStr":accId};
accAction.setParams(params);
        
var accountPromise = this.executeAction(cmp, accAction);

and callback handlers provided to process the results:

accountPromise.then(
        $A.getCallback(function(result){
            // We have the account - set the attribute
            cmp.set('v.account', result);
        }),
        $A.getCallback(function(error){
            // Something went wrong
            alert('An error occurred getting the account : ' + error.message);
        })
     );

Note that the success and error callbacks are encapsulated in $A.getCallback functions as they are executed asynchronously, and therefore are outside of the Lightning Components lifecycle. Note also that if you forget to do this, quite a lot of the promise functionality will still work, which will make it difficult to track down what the exact problem is!

Chaining Promises

The Promise.then() function can return another Promise, thus setting up a chain of asynchronous operations that each complete in turn before the next one can start. Repurposing the above example to retrieve a Contact from the Account:

accountPromise.then(
        $A.getCallback(function(result){
            // We have the account - set the attribute
            cmp.set('v.account', result);

            // return a promise to retrieve a contact
            var contAction = cmp.get("c.GetContact");
            var contParams={"accountIdStr":accId};
            contAction.setParams(contParams);
            var contPromise=self.executeAction(cmp, contAction);
            return contPromise;
        }),
        $A.getCallback(function(error){
            // Something went wrong
            alert('An error occurred getting the account : ' + error.message);
        })
   )
   .then(
        $A.getCallback(function(result){
            // We have the contact - set the attribute
            cmp.set('v.contact', result);
        }),
        $A.getCallback(function(error){
            // Something went wrong
            alert('An error occurred getting the contact : ' + error.message);
        })
     );
    

However, there is a side effect here - the second then() is executed regardless of the success/failure of the first. If the first Promise was rejected, the success callback for the second then() is executed with an empty result. While this would be benign in the above example, it’s probably not behaviour that is desired in most cases. What would be better is for the second then() to execute only if the first one succeeds. Enter the Promise.catch() function.

Catching Errors

The Promise.catch() function is invoked if a Promise is rejected, but the then() function didn’t provide an error callback:

promise.then(function(data) {
                alert('Success : ' + data);
             })
        .catch(function(error) {
                alert('Failure : ' + error.message);
             });

When chaining Promises, the catch() function becomes more powerful: if a Promise is rejected and its then() didn’t provide an error callback, control moves forward to the next then() that does provide one, or to the next catch() function.

Refactoring the example again:

accountPromise.then(
        $A.getCallback(function(result){
            // We have the account - set the attribute
            cmp.set('v.account', result);

            // return a promise to retrieve a contact
            var contAction = cmp.get("c.GetContact");
            var contParams={"accountIdStr":accId};
            contAction.setParams(contParams);
            var contPromise=self.executeAction(cmp, contAction);
            return contPromise;
        })
   )
   .then(
        $A.getCallback(function(result){
            // We have the contact - set the attribute
            cmp.set('v.contact', result);
        })
   )
   .catch(
        $A.getCallback(function(error){
            // Something went wrong
            alert('An error occurred : ' + error.message);
        })
     ); 

The second then() function is now only executed if the account retrieval is successful. An error retrieving either the account or the contact jumps control immediately forward to the catch() function, which surfaces the error.

Going back to the original point made in the introduction, I now have two asynchronous operations, with the second dependent on the success of the first, but coded in a readable fashion.

Further Reading

Promises are a tricky concept to wrap your head around, and it’s certainly worth spending some time learning the basics and playing around with examples - the Mozilla Developer Network documentation quoted above is a good place to start.
