Using a Polyfill Service with SharePoint

If you’ve read many of my previous posts, you have probably seen me use polyfills (e.g. in CRUD Operations for SharePoint Docs Using Fetch and REST) to patch older browsers with modern functionality like fetch. I generally download the polyfill, upload it to SharePoint, and load it on the page as a user custom action. But there is another way to load polyfills, generally called a polyfill service. The idea is that you load the polyfill from an external service, which detects your current browser and serves just enough polyfill code to patch it up to some level of specification compatibility (usually a baseline like ES5, though you can generally also ask for specific functionality, like fetch and/or Promise). There are some unique problems with loading this kind of polyfill in SharePoint, mostly due to limitations in user custom actions. In this post I’m going to talk about how to load such a polyfill in SharePoint, but first let’s talk a little more about polyfill services in general.

So what is a Polyfill Service, anyway?

It all may sound a bit abstract, but a concrete example will make it pretty clear pretty quickly. I’m going to
use a service called Polyfill.io. To load this service, I can add a
script reference to it, like so:
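    <script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>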

Want to see what this does? Just open a new tab in your browser and navigate to the URL
https://cdn.polyfill.io/v3/polyfill.js. Now, your results may well be different from mine, but when I do this
right now, I get:

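Not much, it turns out; the body is essentially empty, and all that comes back is a header comment to this
effect (illustrative rather than verbatim; the exact text varies by service version and detected browser):

    /* Polyfill service v3
     * UA detected: chrome/…   <- whatever you happen to be running (illustrative, not verbatim)
     * Features requested: default
     *
     * No polyfills needed for current settings and browser
     */
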
And if I switch the URL to https://cdn.polyfill.io/v3/polyfill.min.js, I get:

Admittedly, neither of these results is very exciting. That’s because I’m typing this blog post in a reasonably
modern browser, one that doesn’t need any patching in order to behave like a reasonably modern browser. On the
other hand, if I open the non-minified version in Internet Explorer 11, I get:

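Quite a lot more. The top of the file looks something like this (reconstructed for illustration, not verbatim):

    /* Polyfill service v3
     * UA detected: ie/11.0.0   (illustrative, not verbatim)
     * Features requested: default
     *
     * Polyfilling, among many others: Array.from, Object.assign, Promise,
     * fetch, Symbol, Map, Set, WeakMap, CustomEvent, ...
     */

    /* I cut out about 86 pages worth of JavaScript here */
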
What I’m showing here is actually only the comment at the top of the file. As indicated by the comment I added
at the bottom, I’ve cut out about 86 pages worth of JavaScript, because of course it would be absurd to show you
all of that in a blog post. But the comment itself is pretty interesting, because it lists all of the
functionality that the service is going to polyfill in order to make IE 11 act like a modern browser with regard
to JavaScript. If I had loaded this in IE 9 (or IE 9 compatibility mode), it would have patched even more.

So that’s the basic idea of polyfill services. Detect the browser, and patch only what’s needed to bring its
JavaScript support up to a common baseline…pretty cool.

There is a bit more to it, in that you can also designate specific functionality to patch, instead of patching
everything that’s missing even though you’re probably only going to use a subset of it. This allows you to
reduce the size of the download even further, particularly on older browsers. But the download is pretty
reasonable and fast to begin with (the above polyfill for IE 11 is only 46K minified).
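For example, asking the service for just fetch and Promise is a matter of adding the features parameter to the
query string:

    <script src="https://cdn.polyfill.io/v3/polyfill.min.js?features=fetch,Promise"></script>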

One other quick point is that you can’t necessarily polyfill everything needed to bring a browser up to a given
level of spec compatibility. For instance:

  • If it’s a new object or method, you can probably polyfill it. Examples include CustomEvent and
    Array.prototype.filter.
  • If it’s a new syntax, it can probably be handled by transpiling, à la Babel or TypeScript. Examples include
    class declarations and fat arrow functions.
  • If it depends on a native implementation in the browser that isn’t available in older browsers, neither
    polyfills nor transpiling will help you. Examples include Service Workers and the Web Bluetooth API. In
    these cases, you just need to use a modern browser that supports them, or gracefully degrade in older
    browsers (see the feature-detection sketch below).
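Graceful degradation in that last case usually comes down to feature detection, along these lines (the service
worker file path here is just a placeholder):

    // Register a service worker only where the browser actually supports it;
    // older browsers simply skip the enhancement.
    if ("serviceWorker" in navigator) {
        navigator.serviceWorker.register("/sw.js");
    }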

Now, What’s the Problem with Polyfill Services and SharePoint?

First, if you just want to use the thing for a single customization on a single page, then you can stick the
script link in an HTML snippet in a Content Editor Web Part, add your own customization below it, and voilà…no
problem.
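The snippet can be as simple as this (my-customization.js is a placeholder for whatever script you’re adding):

    <script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>
    <script src="/SiteAssets/my-customization.js"></script>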

But most of the customization I do for SharePoint, even client side, is more of an enterprise feature that
needs to be loaded on every page. For that, I usually use user custom actions to load my dependencies and my
custom code on each page in the site collection. But what happens when you create a user custom action with a
full URL pointing to a script that lives someplace other than the current site collection?

If you said something like “programmer no doughnut,” or that it breaks your site pretty badly, you’re
absolutely right. Try to load an external resource in a user custom action, and every page in the site will
load as a blank, white page until you figure out how to remove that user custom action. That’s the reason why,
in previous posts, I didn’t use a service; I downloaded the polyfill locally. The downside is that this means
I’m either loading the polyfill for every browser, or I have to decide myself whether the current browser needs
it, since I’m no longer using the service. That was OK for fetch, as I explained in my previous post, because
even browsers that implement it didn’t implement enough of it to work for me, but normally loading a polyfill
that overrides something the current browser already implements is a no-no.
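If you do go the local route, the usual way to make that call is feature detection before loading, something
like this sketch (the file path is just an example):

    // Only inject the locally hosted polyfill when the browser lacks fetch;
    // browsers that already implement it are left alone.
    if (!window.fetch) {
        var script = document.createElement("script");
        script.src = "/SiteAssets/fetch.polyfill.js";
        script.async = false;
        document.getElementsByTagName("head")[0].appendChild(script);
    }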

It is, of course, possible to load an external resource using a user custom action without breaking your site,
but it takes a bit more work. This technique works for loading anything from a Content Delivery Network (CDN) as
a user custom action.

The Fix!

The fix is pretty simple, but subtly tricky. Cutting to the chase, the JavaScript file I load as a user custom
action boils down to something like this (a minimal version; the essential pieces are the dynamically created
script element and the async flag set to false):
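    // polyfill-loader.js (the name is just an example): load the polyfill
    // service from the CDN by injecting a script element, rather than
    // referencing the external URL directly in the user custom action.
    (function () {
        var script = document.createElement("script");
        script.src = "https://cdn.polyfill.io/v3/polyfill.min.js";
        // Dynamically added scripts are async by default; setting async to
        // false makes them execute in the order they were added, so scripts
        // loaded later can count on the polyfills being in place.
        script.async = false;
        document.getElementsByTagName("head")[0].appendChild(script);
    })();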

Now any script that I need to load that requires this polyfill needs to be in a separate script file, which needs
to be loaded after the above script (i.e. as a user custom action with a higher sequence number).

Why does this work? Normally, browsers that support the async attribute for scripts (all modern browsers) will
load and execute dynamically added scripts (i.e. scripts added through DOM manipulation) as quickly as
possible, which means the second script could be executed before the first has even loaded. But with the async
flag set to false, dynamically added scripts are executed in the order they were added, rather than as soon as
they finish downloading, which means the second script can count on the first being ready for use from the
start.

And since browsers that don’t support async are synchronous by default, this should work for older browsers too
(with a few sad caveats).

The caveats are that, due to bugs or partial implementations of async, the order of execution of these scripts
is not guaranteed in IE 6-9 or Safari 5.0 (see async attribute for external scripts), but it should work
reliably in all reasonably modern browsers. And if I really needed to support IE 9, I could use a third-party
library like RequireJS or HeadJS to load my dependencies reliably regardless of browser version. Either way,
the trick is:

  1. Load a file that is local to the site collection as a user custom action.
  2. Have that file dynamically load the external resource in a synchronous fashion.
  3. Load any code that depends on the external resource in a separate file, later than the first file in the
    page life cycle (usually this means a user custom action with a higher sequence number; see the sketch
    below).
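And as a sketch of steps 1 and 3 in JSOM (the file names and sequence numbers here are just examples):

    // Register the loader and the dependent script as site collection scoped
    // user custom actions. The sequence numbers control load order: the
    // loader (100) runs before the dependent script (200).
    var ctx = SP.ClientContext.get_current();
    var actions = ctx.get_site().get_userCustomActions();

    var loader = actions.add();
    loader.set_location("ScriptLink");
    loader.set_scriptSrc("~SiteCollection/SiteAssets/polyfill-loader.js");
    loader.set_sequence(100);
    loader.update();

    var dependent = actions.add();
    dependent.set_location("ScriptLink");
    dependent.set_scriptSrc("~SiteCollection/SiteAssets/my-customization.js");
    dependent.set_sequence(200);
    dependent.update();

    ctx.executeQueryAsync(
        function () { console.log("User custom actions registered."); },
        function (sender, args) { console.log("Failed: " + args.get_message()); }
    );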
