Saturday, 14 April 2018

Managing a List of New Records with Lightning Components

Nearly 7 years ago I wrote a blog post on Managing a list of New Records in Visualforce, and I thought it would be interesting to recreate this using Lightning Components and compare the two.

The basic concept is that I have a collection of new account records and I want to be able to enter details, add or remove rows and save once I’m happy. In Lightning I’ve created a couple of components to manage this - one that looks after the collection of records (NewAccounts) and one that captures the input for a single account (NewAccount).  In the following screenshot each row equates to the NewAccount component:

[Screenshot]

One interesting aspect is that the list of records is managed outside of the rows, but each row has a button to allow it to be deleted. While I could try to juke around with the styling to line things up, this is an excellent use case for a Component Facet. A Component Facet is an attribute passed to a contained component that is itself a collection of components. As it is defined in the outer component, it can reference aspects of the outer component. In my case it defines the controller function called when the user clicks the delete button and includes the index of the element in the collection of records in the button name, so that I can easily locate and remove the element:

<c:NewAccount account="{!account}" index="{!index}">
  <aura:set attribute="buttons">
    <div class="slds-form-element">
      <label class="slds-form-element__label">Actions</label>
      <div class="slds-form-element__control">
        <lightning:button label="Delete" onclick="{!c.deleteRow}"
                          name="{!'btn' + index}" />
      </div>
    </div>
  </aura:set>
</c:NewAccount>

The first major difference is that in Visualforce I have to create my collection of Account records server side, in the constructor of the page controller, while in Lightning my NewAccounts component creates these in its init handler:

init : function(component, event, helper) {
    var accounts=[];
    for (var idx=0; idx<5; idx++) {
        accounts.push({sobjectType:'Account'});
    }
    component.set('v.accounts', accounts);
}

The only field that I’m defining when I create each account record is the sobjectType - I don’t think that I actually need this, as on the server side I use a strongly typed array of Account records, but I find it’s a great habit to get into. In terms of the user experience there’s probably not a lot to choose here though - the Visualforce page will take a short while to be created and returned, and Lightning pages are hardly ... lightning fast.

However, all that changes when the user adds or deletes rows. In Visualforce I have to send the list of records back to the server and then carry out the appropriate action. In my lightning component, this is handled in the JavaScript controller, for example when deleting a row:

deleteRow : function(component, event, helper) {
    var name=event.getSource().get("v.name");
    var index=name.substring(3);
    var accounts=component.get('v.accounts');
    accounts.splice(index, 1);
    component.set('v.accounts', accounts);
}

I get the index from the name, which has the format ‘btn<index>’, so I just use the String substring prototype function to strip off the ‘btn’ and then the Array splice prototype function to remove the element at that position.
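The same name-parsing and removal logic can be exercised in plain JavaScript - note that substring returns a string, so making the numeric conversion explicit with parseInt is a little safer than relying on splice to coerce it:

```javascript
// Extract the numeric index from a button name of the form 'btn<index>'
function indexFromName(name) {
    // strip the 'btn' prefix; parseInt makes the string-to-number conversion explicit
    return parseInt(name.substring(3), 10);
}

var accounts = [{Name: 'A'}, {Name: 'B'}, {Name: 'C'}];
// remove the element at the position encoded in the button name
accounts.splice(indexFromName('btn1'), 1);
console.log(accounts.map(function(a) { return a.Name; })); // logs [ 'A', 'C' ]
```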

In Visualforce I’d probably show some kind of spinner to let the user know that something has happened, whereas in Lightning this happens so quickly there’s no chance for me to get in between and show something. If I really want to draw the user’s attention, I’d use CSS to highlight the element that was created or do some kind of slow motion hiding of the element before removing it.

When the user decides to save is where I have to do more work in Lightning. In Visualforce I would simply bind the button to a server side action, insert the updated accounts property, and set a page message, maybe after some checking of how many were populated etc. In Lightning I have to figure out the populated records, instantiate a controller action, add my records as a property, hand over to Apex and then process the results. While it sounds like a fair bit to do, it actually isn’t that bad, especially if I create a utility function to process the response that all of my components can utilise, which I do every time for production code.

saveRows : function(component, event, helper) {
    var accounts=component.get('v.accounts');
    var toSave=[];
    for (var idx=0; idx<accounts.length; idx++) {
        if ( (null!=accounts[idx].Name) && (''!=accounts[idx].Name) ) {
            toSave.push(accounts[idx]);
        }
    }
    var accAction = component.get("c.SaveAccounts");
    var params={"accountsStr":JSON.stringify(toSave)};
    accAction.setParams(params);
    accAction.setCallback(this, function(response) {
        var state = response.getState();
        if (state === "SUCCESS") {
            var toastEvent=$A.get("e.force:showToast");
            if (toastEvent) {
                toastEvent.setParams({
                        "type":'success',
                        "title":'Success',
                        "message":'Accounts saved'
                });
                toastEvent.fire();
            } 
        }
        else if (state === "ERROR") {
            var errors = response.getError();
            var message = "Unknown error";
            if (errors && errors[0] && errors[0].message) {
                message = errors[0].message;
            }
            // there's no Promise in play here, so log rather than calling reject
            console.error("Error message: " + message);
        }
    });
    $A.enqueueAction(accAction); 
}
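The shared response-processing utility mentioned above could be sketched like this in plain JavaScript - the response shape mirrors what a Lightning action callback receives, but the function name and result object are my own invention:

```javascript
// Hypothetical helper: turn a Lightning action response into a simple result object,
// so every component callback can delegate the boilerplate state checking
function processResponse(response) {
    var state = response.getState();
    if (state === 'SUCCESS') {
        return {ok: true, value: response.getReturnValue()};
    }
    var errors = response.getError();
    if (errors && errors[0] && errors[0].message) {
        return {ok: false, message: errors[0].message};
    }
    return {ok: false, message: 'Unknown error'};
}

// exercised with a stubbed response, as the framework would supply one
var errorStub = {
    getState: function() { return 'ERROR'; },
    getError: function() { return [{message: 'Required field missing'}]; }
};
console.log(processResponse(errorStub).message); // logs 'Required field missing'
```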

Note that I’m sending my list of records back as a JSON string, a habit I got into when the Lightning Components framework had problems with array parameters. I still use it occasionally so that my controller methods can handle multiple types of parameters. I’m always in two minds as to whether this is a good thing - it makes the code more flexible, but more difficult to understand what is going on without appropriately typed parameters. 

There’s not a lot more to the code, but if you want the full picture it’s available on GitHub.


Saturday, 31 March 2018

Building My Own Learning System - Part 6

Versions

Introduction

In previous posts in this series I covered the initial idea and development through to sharing the code with installation and configuration instructions. Now that I have the basics working I’ve started to iterate and add a few features. Oddly, the first feature I added was the one that I most recently thought of - the ability to display a custom message to the user when they complete a path, retrieved from the path itself. I’m thinking that this will allow me to use the system for more interesting challenges - complete a path to get a keyword, or a location, something like that anyway. I did say I’d just thought of it, not that I’d thought it through! It was only a few lines of code to implement - just a change to the PassStep method so that it returned a tuple of a completed state and a message to display. A few tweaks to the unit tests later, I carried out an sfdx deployment to my sample endpoint and verified with my updated client code that all was working as expected and my sample Bob Buzzard character path was displaying the custom message:

[Screenshot]

Then it hit me - anyone that didn’t have my new client would get errors when accessing the sample endpoint, and they’d have to dig into the debug logs to figure out why. Suddenly it was important to add another feature. (Looking at my sample endpoint it appears that there’s only me using it at the moment anyway, so I guess if you are going to break things then the earlier the better!).

Versioning

Like most things in the tech world, there are loads of different ways to handle versioning. I considered the Salesforce route of having different endpoints for each version and discounted it. I’m not convinced that I want to be supporting older versions and I don’t particularly want to get into managing a number of classes representing historic versions. Salesforce can do this because they have more than one person maintaining things in their downtime, and if I ever become a multi-billion dollar company I’ll reconsider.

The route I ended up going was defining the version for each of the client and server in code, and having the client send its version with every request to an endpoint. The endpoint then compares this to its version and decides if it can handle the request. This gives me the option of supporting older versions if I want to, without committing me to any level of service! If the endpoint can't handle the request it throws an exception indicating what needs to happen - either upgrade the client or ask the admin of the endpoint to upgrade that. The updated client displays the error message to the user who can jump on any required action.
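A minimal sketch of that check in JavaScript - the version format and the exact compatibility rule here are my assumptions, not the actual endpoint code:

```javascript
// Endpoint-side decision: can a server at serverVersion handle a request from clientVersion?
// Assumes simple major.minor numbering where the server only supports its own major version
function checkVersion(serverVersion, clientVersion) {
    var server = serverVersion.split('.').map(Number);
    var client = clientVersion.split('.').map(Number);
    if (client[0] < server[0]) {
        return {ok: false, action: 'upgrade the client'};
    }
    if (client[0] > server[0]) {
        return {ok: false, action: 'ask the endpoint admin to upgrade'};
    }
    return {ok: true};
}

console.log(checkVersion('1.0', '1.0').ok);     // logs true
console.log(checkVersion('2.0', '1.0').action); // logs 'upgrade the client'
```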

Installing the Latest ... Version

The current version of the system is now V1.0, and I’ve created GitHub releases for both the endpoint and client, both of which have been tagged as V1.0.

The unmanaged package for the client V1.0 release is: <Salesforce instance>/packaging/installPackage.apexp?p0=04t0O000001IqIm

Related Information

As I plan to continue with these posts as I add new features - or learn that I’ve made a terrible mistake and should have done things differently - I’ve moved the list of posts into a dedicated page on this blog.

 

Saturday, 24 March 2018

Building My Own Learning System - Part 5


Introduction

In Part 1 of this blog series I covered the problem I was trying to solve (on-boarding/accrediting internal/external staff using common content, but without opening up everything to the entire world) and the data model I was using to manage this. Part 2 was around the fledgeling user interface and a fake service to provide confidence in the method. Part 3 covered the backend, or at least the initial implementation of this - as long as there is a local interface implementation to connect to it, the concrete backend can live anywhere. Part 4 walked through the front end setup and configuration and shared the code.

In part 5 I’ll cover installing the backend code, setup of a remote endpoint, creating a training path and configuring the new endpoint for your client to access. I’ll also share the code.

Code First

The back end code lives at: https://github.com/keirbowden/bbtrnremote

Installation

As the back end manages the data, an unmanaged package isn’t an option as it would mean recreating all training paths etc each time there was an update. As I mentioned in Part 4, I don’t think a private managed package is the right thing for something that people might install in orgs with real data, so the back end is intended to be installed as a discrete set of components. For example, using the Salesforce CLI you could deploy from the cloned repository directory using 

    sfdx force:mdapi:deploy -d src -u <username>

where <username> is from an org that you’ve previously authorised. If you are using something other than the Salesforce CLI then best of luck - I’d switch to the CLI myself ;)

Configuration

There’s a bit more to the back end compared to the front end:

  1. Enable MyDomain (there is a Lightning action and overrides, although not as many as I’d like)
  2. Assign the Training Admin permission set to your user
  3. Create a tab for Training Paths - everything else is reachable from that
  4. Create a Training Path, including at least one step and at least one question in that step. You can also create a badge to go with it if you want -  the code will handle it either way.
  5. Create a Force.com site and note down its address.
  6. Add the Training Site permission set to the Guest User for the site (via Public Access Settings -> View Users)

Then switch over to your client org and configure the endpoint:

  1. Add a Training Endpoint custom metadata entry - name and label as you desire and the following fields populated:

    Hostname: https://<site address>
    Path: /services/apexrest/TrainAPI
    Rewrite Image Links: Checked

  2. Add the site address to the remote site settings

And away you go. If you get any errors, have a look at the debug logs. Typically errors will be data related and I find that the stack trace in the client logs shows me what the problem is.

Caveat Emptor

Same as with the front end, the error handling is pretty basic, I just let the errors make their way back to the client. If you are authoring a training path, make sure you have a test front end to try it out on before you make it available to your users.

Same as the front end again, nothing uses labels.

Creating a training path and most of the associated data is via the regular Salesforce object pages, so be prepared to traverse a bit. The exception to this is when creating a question. The New Question action on the Training Step page will create a new question and take you to a Lightning page that allows you to manage the question and all of its associated answers on a single page. Over time more of this type of assistance will be added. I haven’t really focused on it yet as this is the kind of thing that admins rather than users will be accessing as a rule.

Conclusion

If you hit problems, raise an issue in the appropriate GitHub repo.

I’m not sure what will be in the next instalment. I might go through some of the code in more detail, or there might be new features to talk about. Stay tuned.


Saturday, 17 March 2018

Building my own Learning System - Part 4


Introduction

In Part 1 of this blog series I covered the problem I was trying to solve (on-boarding/accrediting internal/external staff using common content, but without opening up everything to the entire world) and the data model I was using to manage this. Part 2 was around the fledgeling user interface and a fake service to provide confidence in the method. Part 3 covered the backend, or at least the initial implementation of this - as long as there is a local interface implementation to connect to it, the concrete backend can live anywhere.

Now that I’ve been through the building blocks, it’s time to get into the code and also mention a couple of interesting features that I’ve put in place and share the front end code.

Show me the code!

The front end code lives at https://github.com/keirbowden/bbtrn

Installation

If you want to try this out yourself, here’s the approach I’d recommend. 

First configure MyDomain - you can’t use Lightning components without this.

While you could just deploy the front end code using the Salesforce CLI (or one of the legacy tools, such as ant) I’d recommend using the unmanaged packages. There are two of these, containing the following items:

  • The custom metadata types to configure the endpoints and the implementation of the service
    <salesforce URL>/packaging/installPackage.apexp?p0=04t0O000001Ehcd

  • Everything else - UI lightning components, data accessor, service implementation
    <salesforce URL>/packaging/installPackage.apexp?p0=04t0O000001IqIm

I’ve split these into two packages because the configuration should be static, so ideally that will be installed once and only the contents of custom metadata types will change. The package containing everything else will change as new features are added. While as an unmanaged package this can’t be upgraded, as the data is stored elsewhere (the training content endpoints) uninstalling the old version and installing the new one doesn’t lose anything, so this seems like a reasonable approach. Why an unmanaged package I hear you ask? Mainly because this is unlikely to hit the app exchange so I’d be asking everyone to trust me and install code that they couldn’t see in their orgs. While I’m a trustworthy guy, this didn’t feel like the right thing to do.

The backend code doesn’t really lend itself to an unmanaged package, as there will be plenty of data to recreate, and I didn’t want to use a managed package for the reason mentioned above, so I’d recommend using the Salesforce CLI or similar to deploy via metadata.

Of course you can always install the code in your own packaging org and build your own package (managed or unmanaged) from it. Worst case is you might have to do some jiggery pokery when I push new features, as I won’t be taking that into account. 

Configuration

To begin with, I’d recommend configuring things to use my example endpoint via the following steps:

  1. Create a new instance of the Training_Config custom metadata type with the following settings:
    Label/Name : Default
    Service Implementation : TrainingServiceRemoteImpl

  2. Create a new instance of the Training_Endpoint metadata type with the following settings:
    Label: Bob Buzzard
    Name: Bob_Buzzard
    Hostname: https://trainrem-developer-edition.eu8.force.com
    Path: /services/apexrest/TrainAPI
    Rewrite Image Links: Checked

  3. Add the training endpoint hostname https://trainrem-developer-edition.eu8.force.com to your remote sites, otherwise you’ll get errors when attempting to callout

  4. Edit the Training lightning app page and make it available for your profile

Then navigate to said page and away you go.

Note: the front end sends your email address to the back end - this is purely used to identify your requests, but you are trusting me not to spam you (I won’t, because what’s in it for me?).

Interesting Features

  • Restricting Access to Paths

    As you may want to beta test content, a training endpoint has the concept of opening up a training path to a selected group of users. In the sample back end we only know about the user’s email address, so this is how it is controlled. You can create a Candidate Restrictions sobject instance, which defines a domain and the addresses within that domain that are or are not allowed access, and then link this to a Training Path via the Training Path Candidate Restriction junction object. If there are no restrictions, a training path is open for anyone to access. Note that this shouldn’t be considered any kind of secure authorisation system - it’s purely a simple way to stop people being presented with a path before you are ready for them to see it. If you need to lock things down, protect the endpoint via authentication.

  • Wait Your Turn

    If you specify the Hours Between Attempts field on a training path and a user answers the questions incorrectly, they will be made to wait until at least that number of hours have elapsed before trying again. Hopefully this will cut down on the number of people guessing their way through paths. Probably not, but you can only go so far without reinventing web assessor!

Caveat Emptor

The error handling is fairly basic, mainly because the errors are typically down to bad data/setup at the remote endpoint, so I usually catch them before users do. 

 Nothing is labels yet - that’s on my todo list, but it’s all hardcoded English strings for now.

Conclusion 

If you hit any problems, raise an issue in the appropriate git repo. I’ve done quite a bit of testing, but if there’s one thing 30+ years in the software industry has taught me, it’s that as soon as I let anyone loose on my stuff it gets broken. I may just take down your issue on my invisible typewriter and file it in the bin, but equally I might fix it, so it’s worth rolling the dice.

In the next instalment of this series, I’ll share the backend code and what you need to do to create your own training endpoint and paths.


Sunday, 11 March 2018

Turning on the Lightning Locker Service


Introduction

This week I turned on the Locker service for an application that I wrote several years ago. It’s a few “pages” built from a fairly large number of custom Lightning components with a lot of JavaScript business logic. The application itself works fine without the Locker service, but there’s more and more standard components and features that I’d like to use, but that are only available in API 40+. I also have a JavaScript library that some of the components interact with, so I needed to upgrade all of my components at the same time, or risk some of them using a different window object.

I’ve made various attempts at this in the past, but always been defeated by weird errors that I was unable to isolate or reproduce. aura:if was often in the vicinity though, so it’s always my prime suspect. The last attempt was about 6 months ago and I’d created quite a few applications running on the latest API in that time, and I was hopeful that nth time is the charm, so I ran my script to update all of the meta-xml files to API 41 (there are 300 of them, so not really something that can be done manually) and deployed the app to my dev org. Here’s what I found.

Issues

Deployment Time

The first attempt failed with 19 errors, including the following:

  • Invalid Aura API - $A.util.json.encode. 
    This shows how long this application has been around - some of the very early examples in the Lightning Components Developers Guide etc used this method, but it’s been advised against for a while. I thought I’d cleared them all out, but had obviously missed one or two. This is a simple fix, just use JSON.stringify instead.

  • Invalid Aura API - $A.util.format.
    This is around replacing tokens such as {0} in strings/labels etc and can be replaced with the standard JavaScript String replace function, so rather than:

        $A.util.format(<string>, val);

    you would have

        <string>.replace('{0}', val);

  • Invalid Aura API - this was thrown from a configuration item being passed to a JavaScript library function containing the text ‘onSelect’. I’m pretty sure that this was a false positive, but as this was something that I’d created an alternative pure Lightning version of, I don’t think I’ll be needing it going forward so sacked it off.
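The $A.util.format replacement above generalises to multiple tokens with a small helper - a sketch of my own, not framework code:

```javascript
// Replace {0}, {1}, ... tokens in a template string, mimicking $A.util.format
function format(template) {
    var args = Array.prototype.slice.call(arguments, 1);
    return template.replace(/\{(\d+)\}/g, function(match, idx) {
        // leave the token in place if no matching argument was supplied
        return typeof args[idx] !== 'undefined' ? args[idx] : match;
    });
}

console.log(format('Saved {0} of {1} accounts', 3, 5)); // logs 'Saved 3 of 5 accounts'
```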

The biggest issue was: Invalid SecureWindow API, top was blacklisted. Even though this is a Lightning Experience application and uses LEX navigation, there’s still one place where I surface part of the application through Lightning Out, when creating a child object. Even though I’ve overridden the new action, this isn’t respected so I end up in the regular modal style new component. Thus I navigate to a Visualforce page instead and use Lightning Components for Visualforce to surface the contents. This is all fine until I want to get back to the main page for the app. I can’t use force:navigateTo methods by default, as these are only available in LEX, and if I set the window.location to a new value, that just changes the Visualforce iFrame embedded in the page. Thus I used window.top.location, as this changes the URL for the outermost window of the app.

There is a solution to this though - I can create my own handler for the force:navigateToSObject event in the Visualforce page, and as there is no Locker Service in Visualforce I’m free to tinker with the outer window location to my heart’s content. Make sure that you add the dependency reference to the event to your Lightning app though, e.g.

    <aura:dependency resource="markup://force:navigateToSObject" type="EVENT"/>

I didn’t to begin with and spent a lot of time trying to figure out why it wasn’t working!

Run Time

I only hit a few issues at runtime, and the biggest hurdle was actually getting the errors surfaced - I ended up taking a binary chop approach to find the problem component, commenting out half of the functionality at a time until I was able to narrow things down, then surrounding lots of code with try/catch exception handlers.

  1. Missing ‘var’ when using a variable, e.g.

        for (i=0; i<len; i++)

    Without the locker service, this will try to find a variable named ‘i’ by searching through the scope chain and, if it doesn’t find one, create it in the global scope. Almost certainly not what is required, and in my case definitely not. With the locker service, ES5 strict mode is enabled and this generates a reference error.

  2. Getting non-existent attributes, e.g.

        var prop=cmp.get('prop')

    Spot the problem? Missing the ‘v.’ namespace for the attribute, although in one case this was there but the attribute hadn’t been declared in the markup.

  3. Breaking encapsulation
    In this case, I was programmatically finding a component in the ‘ui’ namespace and changing an attribute. This is exactly what the locker service was created to stop, so no surprises there was an error. I’d completely forgotten it was there - it was a workaround to a bug with the standard select component where I couldn’t dynamically set the multi attribute based on one of my component attributes. It’s fixed now, and probably was years ago, so I just removed the offending code.
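The first two runtime issues can be reproduced outside the framework - under strict mode, which the Locker Service enables, the missing var becomes a ReferenceError instead of a silent global:

```javascript
'use strict';

// loop counter was never declared with var, so strict mode rejects the assignment
function loopWithoutVar(len) {
    for (i = 0; i < len; i++) { }
}

try {
    loopWithoutVar(3);
} catch (e) {
    console.log(e instanceof ReferenceError); // logs true
}
```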

In my app, these are genuine bugs and it’s good to get them out of the way. Things obviously still work at the moment, but are fragile - in the first case, if there is an existing variable in the scope chain I’ll overwrite its value, which never ends well.

Conclusion

While I haven’t exhaustively tested every aspect of my app, the weird errors that I’ve seen in the past didn’t appear this time around. I’m sure that they weren’t all down to the locker service - I’ve fixed plenty of issues in my app over the years - but the locker service definitely made things more difficult to track down and proved to have plenty of gaps when used in anger. But for my purposes, it’s ready for prime time.

Saturday, 3 March 2018

Building my own Learning System - Part 3


Introduction

In Part 1 of this blog series I covered the problem I was trying to solve (on-boarding/accrediting internal and external users with the same content, but without opening up all my content to everyone) and the data model to support this. Part 2 covered the user interface and a faker to allow me to check that my idea had legs without building the whole thing - if I’m going to fail, I like to get it over with as quickly as possible. This wasn’t the case though, so I then proceeded to build out the backend.

There are any number of ways to implement the backend, both on Salesforce and elsewhere. As I’ve created an Apex interface that the front end works against, any mechanism can be supported simply by creating a new implementation of the interface that knows how to talk to the remote endpoint. For the purposes of my sample implementation I went with a REST endpoint surfaced via a Force.com site, as this means I don’t have to worry about authentication, can focus on the business problem and can easily use hurl.it to test. Whether this is appropriate for a real training endpoint is a topic for discussion - it depends on how sensitive the information is and whether there is any issue with someone getting hold of it. I decide for each endpoint based on these and other factors.

REST API

Again in the interests of simplicity, I went with a REST API that exposes a single POST method that acts as a dispatcher. The body of the request contains the underlying “method” that should be invoked, and any parameters required for that method. While this might not please the purists, as I don’t have an endpoint per object, I didn’t want to have to create a new Apex class to implement each method that I added, especially for a sample implementation.
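The dispatcher idea can be sketched in plain JavaScript (the real implementation is an Apex REST class; the method names and body shape here are purely illustrative):

```javascript
// Single POST handler dispatching on a 'method' field in the request body
var handlers = {
    getPaths: function(params) { return {paths: []}; },
    passStep: function(params) { return {complete: true, message: 'Well done ' + params.email}; }
};

function handlePost(body) {
    var request = JSON.parse(body);
    var handler = handlers[request.method];
    if (!handler) {
        throw new Error('Unknown method: ' + request.method);
    }
    return handler(request.params || {});
}

var result = handlePost('{"method":"passStep","params":{"email":"a@b.com"}}');
console.log(result.complete); // logs true
```

Adding a new "method" is then a matter of registering another handler, rather than creating a new class per endpoint.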

The API implements much the same interface as my faker, but with additional parameters to identify the candidate taking the training. As I’m not using authentication, I identify users by their email address. Note that this is about identification rather than authentication or authorisation, as anyone can choose any email address.

Client

The client makes use of a new implementation of the same API as the faker, that simply sends a request to the remote REST API. As I want to support multiple endpoints, these are configured via a custom metadata type in the client org. This means that a user has a number of training histories and badge totals, one per endpoint.

In my sample REST API, the training step details are retrieved from rich text area fields on sObjects. Any images that have been added via the rich text editor will be returned with a relative URL which will obviously point to nothing in the client org. To fix this, the custom metadata for an endpoint has a checkbox to indicate if images should be rewritten. If this is checked then the endpoint hostname is prepended to the image links, which works a treat.
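A sketch of the rewriting idea in JavaScript - the regex and the rich text markup shown are my assumptions, not the actual implementation:

```javascript
// Prepend the endpoint hostname to relative image src attributes in rich text HTML
function rewriteImageLinks(html, hostname) {
    return html.replace(/src="\//g, 'src="' + hostname + '/');
}

var content = '<p>Step one</p><img src="/servlet/rtaImage?id=123"/>';
console.log(rewriteImageLinks(content, 'https://example.force.com'));
// logs '<p>Step one</p><img src="https://example.force.com/servlet/rtaImage?id=123"/>'
```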

As nothing is now stored in the client org, a user can install the training metadata components in any Salesforce org and, as long as they reuse the same email address that they’ve previously taken training with and configure the endpoints appropriately, they can pick up where they left off. This was a key feature for me as I really didn’t want to have to install the training paths along with the client software, as it makes things much more resistant to change.

That’s it for part 3 - in part 4 I’ll go through the installation and configuration and, all things being equal, share the code too.


Sunday, 4 February 2018

Building My Own Learning System - Part 2

Introduction

In Part 1 of this blog series I covered the problem I was trying to solve (on-boarding/accrediting internal and external users with the same content, but without opening up all my content to everyone) and the data model to support this. I also mentioned that this isn’t an attempt to rebuild Trailhead, and that is still the case.

In this post I’ll cover the user interface and the elements of the solution that allowed me to test it without having to build out the entire backend.

The “Design”

The first thing I did was to sketch out what I wanted a couple of the components to look like. The original sketches are below, and I think we can all agree that this communicates the entire concept of the look and feel I’m trying to achieve ;)

[Sketch]

It did help me to think about how I wanted things to work though.

Single Page Application

I wanted a single page application (SPA) as I wasn’t going to have the sobjects in the same instance as the client, so lightning navigation between sobjects wasn’t an option. This does present a challenge with regard to bookmarking, but that is something I think I can do by making the SPA support URL parameters. The user might have to jump through a couple of extra hoops, but nothing too arduous, so I felt happy kicking that down the road to a later release.

The SPA consists of a central section and right hand sidebar. The sidebar contains details of the current content endpoint and allows switching of endpoints, while the central section contains the actual learning content. The page is constructed from a number of lightning components and styled using the SLDS, as I want people to use it from inside Salesforce so it’s important that the styling is familiar.

[Screenshot]

Fake News

When I’m building an application of this nature, I’ll usually create a fake data provider so that I can get the UI flow without having to put a lot of effort into writing the actual server side implementation. Usually this is because I’m building it out in my spare time and it allows me to get something to throw stones at in place quickly.  As I’m looking at a distributed system in this case, it was even more useful as I didn’t have to create remote content endpoints and manage the integration with them. Instead I created the initial cut of the Apex interface that I want an endpoint to support and then wrote a faker implementation class that would return indicative but hardcoded responses.  This approach has the added benefit of allowing me to iterate on the interface without having to update multiple implementations of it.
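In outline, the faker is just another implementation of the same interface that returns canned data - this JavaScript sketch uses hypothetical names (the real interface is Apex):

```javascript
// Fake service: same shape as the real provider, but hardcoded responses
var fakeService = {
    getTrainingPaths: function() {
        return [
            {name: 'Path One', steps: 3},
            {name: 'Path Two', steps: 5}
        ];
    }
};

// the UI works against the interface, so swapping in a real remote
// implementation later requires no changes here
function renderPaths(service) {
    return service.getTrainingPaths().map(function(p) { return p.name; });
}

console.log(renderPaths(fakeService)); // logs [ 'Path One', 'Path Two' ]
```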

Training Page

My training page is a lightning page with the training SPA added to it. Notice that I didn’t create a header aspect for my SPA - this is because a lightning page automatically adds a header that I can’t customise. The page thus has the lightning experience global header and the standard page header, so if I add a third one then most of the visible area is consumed. 

The SPA initially displays the available training paths from the fake service, laid out as a wrapping grid that shows 3 paths per row for the desktop and 1 per row on mobile:

Screen Shot 2018 02 04 at 08 39 08 png

Clicking on any of the paths shows the underlying steps that I need to complete:

Screen Shot 2018 02 04 at 08 39 20 png

and clicking into a step takes me to the actual content with any questions that have been defined, although for the demo step I’ve chosen here the fake service pretends I’ve completed it:

Screen Shot 2018 02 04 at 08 39 42 png

 

And that wraps up this post. In the next instalment I’ll cover the integration with a remote endpoint.


Friday, 26 January 2018

SFDX and the Metadata API Part 4 - VSCode Integration

SFDX and the Metadata API Part 4 - VSCode Integration

Introduction

In the previous instalments of this blog series I’ve shown how to deploy metadata, script the deployment to avoid manual polling and carry out destructive changes. All key tasks for any developer, but executed from the command line. On a day to day basis I, like just about any other developer in the Salesforce ecosystem, will spend large periods of the day working on code in an IDE. As it has Salesforce support (albeit still somewhat fledgling) I’ve switched over completely to the Microsoft VSCode IDE. The Salesforce extension does provide a mechanism to deploy local changes, but at the time of writing (Jan 2018) only to scratch orgs, so a custom solution is required to target other instances.

In the examples below I’m using the deploy.js Node script that I created in SFDX and the Metadata API Part 2 - Scripting as the starting point.

Sample Code

My sample class is so simple that I can’t think of anything to say about it, so here it is:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
    }
}

and the package.xml to deploy this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <version>40.0</version>
</Package>

VSCode Terminal

VSCode has a nice built-in terminal in the lower panel, so the simplest and least integrated solution is to run my commands through this. It works, and I get my set of results, but it’s clunky.

Screen Shot 2018 01 24 at 17 41 26

VSCode Tasks

If I’m going to execute deployments from my IDE, what I’d really like is a way to start them from a menu or shortcut key combination. Luckily the designers of VSCode have foreseen this and have the concept of Tasks. Simply put, a Task is a way to configure VSCode with details of an external process that compiles, builds, tests etc. Once configured, the process will be available via the Task menu and can also be set up as the default build step. 

To configure a Task, select the Tasks -> Configure Tasks menu option and choose the Create tasks.json file from template option in the command bar dropdown:

Screen Shot 2018 01 24 at 07 31 04

Then select Others from the resulting menu of Task types:

Screen Shot 2018 01 24 at 07 31 57

This will generate a boilerplate tasks.json file with minimal information, which I then add details of my node deploy script to:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "node",
            "args":["deploy.js"]
        }
    ]
}

I then execute this via the Tasks -> Run Task menu, choosing 'build' from the command bar dropdown and selecting 'Continue without scanning the task output'.

This executes my build in the terminal window much as before, but saves me having to remember and enter the command each time:

Screen Shot 2018 01 24 at 17 06 36

Sadly I can’t supply parameters to the command when executing it, so if I need to deploy to multiple orgs I need to create multiple entries in the tasks.json file, but for the purposes of this blog let’s imagine I’m living a very simple life and only ever work in a single org!
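If I did need a second org, the workaround is just another entry with its own label - something like the following sketch, where the second script name is hypothetical:

```json
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "node",
            "args": ["deploy.js"]
        },
        {
            "label": "build uat",
            "type": "shell",
            "command": "node",
            "args": ["deployUAT.js"]
        }
    ]
}
```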

Capturing Errors

Executing my command from inside VSCode is the first part of an integrated experience, but I still have to check the output myself to figure out if there are any errors and which files they are located in. For that true developer experience I’d like feedback from the build stage to be immediately reflected in my code. To capture an error I first need to generate one, so I set my class up to fail:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
        // this will fail
        me.do();
    }
}

VSCode Tasks can pick up errors, but it requires a bit more effort than simple configuration.

Tasks detect errors via ProblemMatchers - these take a regular expression to parse an error string produced by the command and extract useful information, such as the filename, line and column number and error message. 

While my deploy script has access to the error information, it’s in JSON format which the ProblemMatcher can’t process. Not a great problem though, as my node script can extract the errors from the JSON and output them in regexp friendly format. 

Short Diversion into the Node Script

As I’m using execFileSync to run the SFDX command from my deploy script, if the command returns a non-zero result, which SFDX does if there are failures on the deployment, it will throw an exception and halt the script. To get around this without having to resort to executing the command asynchronously and capturing the stdout, stderr etc, I simply send the error stream output to a file and catch the exception, if there is one. I then check the error output to see if it was a failure on deployment, in which case I just use that instead of the regular output stream or if it is a “real” exception, when I need to let the command fail. This is all handled by a single function that also turns the captured response into a JavaScript object:

function execHandleError(cmd, params) {
    var resultJSON, result;
    // capture the error stream to a file so it can be inspected if the
    // command returns non-zero
    var err=fs.openSync('/tmp/err.log', 'w');
    try {
        resultJSON=child_process.execFileSync(cmd, params, {stdio: ['pipe', 'pipe', err]});
        result=JSON.parse(resultJSON);
    }
    catch (e) {
        // the command returned non-zero - this may mean the metadata operation
        // failed, or there was an unrecoverable error.
        // Is there an opening brace? If so, treat it as a deployment failure
        var errMsg=''+fs.readFileSync('/tmp/err.log');
        var bracePos=errMsg.indexOf('{');
        if (-1!=bracePos) {
            resultJSON=errMsg.substring(bracePos);
            result=JSON.parse(resultJSON);
        }
        else {
            throw e;
        }
    }
    finally {
        fs.closeSync(err);
    }

    return result;
}

Once my deployment has finished, I check to see if it failed and if it did, extract the failures from the JSON response:

if ('Failed'===result.result.status) {
    if (result.result.details.componentFailures) {
        // handle if single or array of failures
        var failureDetails;
        if (Array.isArray(result.result.details.componentFailures)) {
            failureDetails=result.result.details.componentFailures;
        }
        else {
            failureDetails=[];
            failureDetails.push(result.result.details.componentFailures);
        }
        ...
    }
    ...
}

and then iterate the failures and output text versions of them:

for (var idx=0; idx<failureDetails.length; idx++) {
    var failure=failureDetails[idx];
    console.log('Error: ' + failure.fileName +
                ': Line ' + failure.lineNumber +
                ', col ' + failure.columnNumber +
                ' : ' + failure.problem);
}

Back in the Room

Rerunning the task shows any errors that occur:

Screen Shot 2018 01 24 at 17 34 42

I can then create my regular expression to extract information from the failure text - I used Regular Expressions 101 to create this, as it allows me to baby-step my way through building the expression. Once I’ve got the regular expression down, I add the ProblemMatcher stanza to tasks.json:

"problemMatcher": {
    "owner": "BB Apex",
    "fileLocation": [
        "relative",
        "${workspaceFolder}"
    ],
    "pattern": {
        "regexp": "^Error: (.*): Line (\\d+), col (\\d+) : (.*)$",
        "file": 1,
        "line": 2,
        "column": 3,
        "message": 4
    }
}
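As a quick sanity check before wiring it into VSCode, the regular expression can be exercised in Node against a sample line in the format the deploy script emits - the file name and message below are invented for the example:

```javascript
// Exercise the ProblemMatcher regexp against a sample error line in the
// format emitted by the deploy script (file name and message are invented)
const pattern = /^Error: (.*): Line (\d+), col (\d+) : (.*)$/;
const sample = 'Error: classes/VSCTest1.cls: Line 5, col 9 : Method does not exist: do()';

const match = sample.match(pattern);
console.log(match[1]); // classes/VSCTest1.cls
console.log(match[2], match[3]); // 5 9
console.log(match[4]); // Method does not exist: do()
```

If the pattern doesn’t match, VSCode just shows no problems rather than complaining, so checking it outside the IDE saves some head-scratching.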

Now when I rerun the deployment, the problems tab contains the details of the failures surfaced by the script:

Screen Shot 2018 01 24 at 17 46 10

and I can click on the error to be taken to the location in the offending file.

There’s a further wrinkle to this, in that lightning components report errors in a slightly different format - the row/column in the result is undefined, but if it is known it appears in the error message on the following line, e.g.

Error: src/aura/TakeAMoment/TakeAMomentHelper.js: Line undefined, col undefined : 0Ad80000000PTL3:8,2: ParseError at [row,col]:[9,2]
Message: The markup in the document following the root element must be well-formed.

This is no problem for my task, as the ProblemMatcher attribute can specify an array of elements, so I just add another one with an appropriate regular expression:

"problemMatcher": [ {
        "owner": "BB-apex",
        ...
    },
    {
        "owner": "BB-lc",
        "fileLocation": [
            "relative",
            "${workspaceFolder}"
        ],
        "pattern": [ {
            "regexp": "^Error: (.*): Line undefined, col undefined : (.*): ParseError at \\[row,col\\]:\\[(\\d+),(\\d+)\\]$",
            "file": 1,
            "line": 3,
            "column": 4
        },
        {
            "regexp":"^(.*$)",
            "message": 1
        } ]
    }],

Note that I also specify an array of patterns to match the first and second lines of the error output. If the error message was spread over 5 lines, I’d have 5 of them.
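The same sanity check works for the multi-line case - each regular expression in the pattern array is applied to consecutive lines of output, using the lightning component failure shown earlier as the sample:

```javascript
// Each pattern in the array matches one line of the error output;
// sample lines are taken from the lightning component failure above
const first  = /^Error: (.*): Line undefined, col undefined : (.*): ParseError at \[row,col\]:\[(\d+),(\d+)\]$/;
const second = /^(.*)$/;

const lines = [
    'Error: src/aura/TakeAMoment/TakeAMomentHelper.js: Line undefined, col undefined : 0Ad80000000PTL3:8,2: ParseError at [row,col]:[9,2]',
    'Message: The markup in the document following the root element must be well-formed.'
];

const m1 = lines[0].match(first);
const m2 = lines[1].match(second);
console.log(m1[1]);        // src/aura/TakeAMoment/TakeAMomentHelper.js
console.log(m1[3], m1[4]); // 9 2
console.log(m2[1]);        // the full message line
```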

You can view the full deploy.js file at the following GIST, and the associated tasks.json.

Default Build Task

Once the tasks.json file is in place, you can set this up as the default build task by selecting the Tasks -> Configure Default Build Task menu option, and choosing Build from the command drop down menu. Thereafter, just use the keyboard shortcut to execute the default build.


Saturday, 13 January 2018

Building My Own Learning System - Part 1

Building My Own Learning System


Introduction

Before I get started on this post, I want to make one thing clear. This is not Trailhead. It’s not Bob Buzzard’s Trailhead. It’s not a clone or wannabe of Trailhead. While it would be fun to build a clone of Trailhead, all it would be is an intellectual exercise to see how close I could get. So that’s not what I did. I didn’t build my own Trailhead. Are we clear on that? Nor is it MyTrailhead, although it could be used in that way. But again, I’m not looking to clone an existing solution, even if it is still in pilot and likely to stay there for a couple of releases. I’m coming at this from a different angle, as will hopefully become clear from this and subsequent blog posts. Put the word Trailhead out of your mind.

All that said, I was always going to build my own training system. Pretty much every post I’ve written about Trailhead had a list of things I’d like to see, and I can only suppress the urge to write code in this space for so long. This might mean that I moderate my demands, realising how difficult things really are when you have to implement them rather than just think about them in abstract form.

The Problem

Trailhead solves the problem of teaching people about Salesforce at scale, with content that comes from the source and is updated with each release. MyTrailhead is about training/onboarding people into your organisation. The problem I was looking to solve was somewhat different, although closer to MyTrailhead. I wanted a way to onboard people from inside and outside my organisation onto a specific application or technology, but without sending everyone through the same process.

For example, regular readers of this blog or my medium posts will know that I run product development at BrightGen, and that we have a mature Full Force solution in BrightMedia. We also have a bunch of collateral and training material around BrightMedia that I’d like to surface to various groups of people:

  • Internal BrightGen sales team
  • Internal BrightGen developers
  • External customer users

I don’t particularly want a single training system, as this would mean giving external users access to internal systems. It’s also likely that I’ll have a bunch of training information that isn’t BrightMedia specific, and I don’t really want to colocate this with everything else.

Essentially what I’m looking for is a training client that can connect to multiple endpoints, each endpoint containing content specific to a product/application/team. That, and a way to limit who can access the content, allows me to colocate the content with the application, potentially in the packaging org that contains the application.

The First Stirrings of the Solution

Data Model

As the client won’t be accessing data from the same Salesforce org, or potentially any Salesforce org, my front end is backed by a custom apex class data model rather than sObjects:

Screen Shot 2018 01 13 at 18 12 00

I’ve deliberately chosen names that are different to Trailhead, because as we all know this isn’t Trailhead. I was very tempted to use insignia rather than badge, as I think that gives it a somewhat British feel, but in the end I decided that would confuse people. Each path has topics associated with it so that I can see how strong a candidate is in a particular field. The path and associated steps are essentially the learning template, while the candidate path/step tracks the progress of a candidate through the path. A path has a badge associated with it and once a candidate completes all steps in the path they are awarded the badge. The same(ish) data model as myriad training systems around the globe.

The records that back this data model live in the content endpoint. Thus the candidate doesn’t have a badge count per se; instead they have a badge count per functional area. In the BrightGen scenario they will have a badge count for BrightMedia, and a separate badge count for other product areas. They can also have multiple paths in progress, striped across content endpoints.
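The real model is a set of Apex classes, but the relationships are simple enough to sketch - here in JavaScript, with all names and field values invented for illustration:

```javascript
// Illustrative sketch of the data model - the real version is Apex
// classes, and every name/value here is invented for this example

// The learning template: a named path of steps, tagged with topics,
// with a badge awarded on completion
class Path {
    constructor(name, topics, steps, badge) {
        this.name = name;
        this.topics = topics;  // functional areas, for gauging candidate strength
        this.steps = steps;
        this.badge = badge;
    }
}

// Tracks one candidate's progress through a path
class CandidatePath {
    constructor(path) {
        this.path = path;
        this.completedSteps = new Set();
    }
    completeStep(step) {
        this.completedSteps.add(step);
    }
    // the badge is awarded once every step in the path is complete
    get earnedBadge() {
        return this.completedSteps.size === this.path.steps.length
            ? this.path.badge : null;
    }
}

const path = new Path('BrightMedia Basics', ['Sales'], ['Intro', 'Setup'], 'Media Star');
const cp = new CandidatePath(path);
cp.completeStep('Intro');
console.log(cp.earnedBadge); // null - one step still outstanding
cp.completeStep('Setup');
console.log(cp.earnedBadge); // Media Star
```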

User Interface

I created the front end to work against these custom classes as a single page application. As the user selected paths and steps the page would re-render itself to show the appropriate detail. I’m still tweaking this so I’ll cover the details in the next post in the series.

Show me the Code

I don’t plan to share any code in these posts until the series is complete, at which point I’ll open source the whole thing on github, mainly because it isn’t ready yet. I’m pretty sure I’ve got the concepts straight in my head, but the detail keeps changing as I think of different ways of doing things.