Tuesday, August 13, 2013

What's wrong with extending the DOM now?

Before you read this post, you should definitely read kangax's post, in which he skillfully explains the risks and pitfalls of extending "host" objects, particularly as experienced by the Prototype library. It both opens and concludes with strong warnings against the practice, allowing for it only in careful polyfills and specific, "controlled" environments.
It was good advice, but things have changed a great deal in the last three years. The DOM is no longer the dark and dangerous world of cross-browser challenges that it was. It's time to start re-evaluating the risks and benefits:

Past Risks Of Extending The DOM

Lack of specification (prototype exposure not guaranteed)

Guarantees and specifications are important. But the reality is that prototype exposure for the DOM has become remarkably consistent and ubiquitous in modern browsers. As long as you are willing to sacrifice the rapidly fading market share of IE < 9 and the like, there is little to fear here.

Host objects have no rules (unpredictable interactions)

This argument was always weak, and it gets weaker as browsers (especially IE) rapidly improve; even before they improved, those creating wrapper libraries had to watch for these quirks too. To say "extending DOM objects is kind of like walking in a minefield" is to say the same about directly using the DOM, whether or not you extend it. In any case, all but one of the examples are for IE, presumably older versions. The remaining example (overriding target on events) is hardly difficult to avoid. I'm not even sure why you'd want to do that.
The bottom line is that these particular minefields have grown dramatically less dangerous to navigate as vendors have been working to clear them and everyone learns the trouble spots. It never hurts to step lightly when you leave the well-trod paths, but by all means, it's time to relax a little here.

Chance of collisions (hard to scale APIs safely)

IE8 and earlier conflated element properties and attributes; as IE8 disappears, so do the problems this raised. The browser is admittedly becoming a richer environment: more features, more libraries, more API surface occupied. But the expansion is not even, and some notably different naming patterns have evolved. Specifically, those developing the DOM tend to choose names quite unlike those developing with the DOM.
The querySelectorAll function is probably the perfect example of the difference in naming philosophy. In most JavaScript frameworks, this is wrapped in something that values conciseness or readability over technical descriptiveness. We would use something like find, a name that i'd wager is extremely unlikely to ever conflict with future DOM specifications. Similar safe names are usually easy to find, and future DOM features are not often hard to avoid. Of course, technical descriptiveness and readable conciseness do sometimes overlap, but when they do, it almost always means that the DOM feature is what you need (or close enough). Just limit yourself to a conditional polyfill in such cases. I won't deny that it can be a bit of a dance, but it is a very easy one to learn.
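The "dance" might look something like this sketch (the name find and the guard style are illustrations from this discussion, not any standard or library):

```javascript
// Hedged sketch: add a concise `find` alias for querySelectorAll,
// but only if no present (or future) DOM feature already claims the
// name. In a browser you would pass Element.prototype; any object
// with a querySelectorAll works for illustration.
function addFind(proto) {
  if (!('find' in proto)) {   // conditional: step aside for the platform
    proto.find = function (selector) {
      return this.querySelectorAll(selector);
    };
  }
  return proto;
}
```

If a future spec ever defines its own find on the prototype, the guard turns this into a no-op and your code keeps working.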
The other conflict area is with named forms and the named fields therein. Named forms are automatically exposed on the document object. This is, in my opinion, still a very good reason to avoid extending the document object itself, at least with any single-word name, as those are more likely to be form names. Named form fields being made available on the form element itself is a much more difficult challenge. However, this is no more a problem for DOM extenders than it is for wrappers. Try calling hide() on a jQuery-wrapped <form><input name="style"></form> and you'll see what i mean. No matter what approach you use, you must always take special care and consideration when interacting with a <form> element. Here there be dragons!
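To see the shadowing without a browser, here is a simulation of what forms do natively: in a real browser, <form><input name="style"></form> makes form.style resolve to the input element instead of the built-in CSSStyleDeclaration.

```javascript
// Simulation only — real form elements do this natively. A named
// field is exposed as a property of the form, shadowing whatever
// built-in property shared its name.
function fakeForm(fieldNames) {
  var form = { style: { cssText: '' } };  // stand-in for the built-in property
  fieldNames.forEach(function (name) {
    form[name] = { tagName: 'INPUT', name: name };  // the field wins the name
  });
  return form;
}

var form = fakeForm(['style']);
// form.style is now the input, not a style object
```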

Performance overhead (manual extension doesn't scale)

Since we are addressing the risk for modern browsers and kangax's discussion of manual extension presumes the technique to be a workaround for older browsers with poor DOM prototype exposure, it is tempting to dismiss this out of hand. However, manual extension is still a legitimate technique for any library trying to "step lightly" and not employ the blanket coverage provided by prototype extension. The good news here is that when you are stepping lightly, you are extremely unlikely to be extending so much or so often that you need to worry about performance. The modern DOM needs far fewer extensions than it used to, especially if you try to restrain your extensions to powerful utilities instead of clusters of simplistic aliases.
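Stepping lightly with manual extension can be sketched like this (the helper names are illustrative, not any particular library's API; plain objects stand in for elements):

```javascript
// Hedged sketch of "manual extension": copy helpers onto the specific
// objects you touch, instead of onto a shared prototype.
var helpers = {
  hide: function () { this.hidden = true; return this; },
  show: function () { this.hidden = false; return this; }
};

function extend(el) {
  for (var key in helpers) {
    if (!(key in el)) {   // never clobber an existing property
      el[key] = helpers[key];
    }
  }
  return el;
}
```

Because only the elements you actually use get extended, the per-element copying cost stays proportional to how lightly you step.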

IE DOM is a mess (memory leaks and attr/prop confusion)

The conflating of properties and attributes stopped after IE8. And the famous memory leaks were largely resolved earlier than that. Moving on...

Browser bugs (known flaws in IE8 and older)

Bugs will always be there for those stuck in the past, but in the present they are closely hounded. Wrappers and extenders alike ought still to keep a sharp eye out, especially on bleeding-edge features, but the core DOM APIs are now regularly battle-tested by hordes of barbarian code all over the planet.

So What Risk Is Left?

Not too much. I believe the list of risks particular to DOM extenders has been reduced to these very manageable risks:
  • New code being opened in old browsers
  • Name conflicts with future/proprietary DOM enhancements
  • Name conflicts with named forms
Dealing with older browsers in this case means not falling back on increasingly desperate or risky measures but directing those users to upgrade instructions or a reduced-functionality version. Well-established techniques abound: conditional comments, Modernizr, even server-side response changes are legitimate.
Dealing with future/proprietary enhancements mostly just requires checking before extending the prototypes or the elements themselves. Easy peasy, just remember to do if (!('name' in object)) instead of if (object.name). And you may be surprised by how thoroughly you can avoid trouble when extending objects, if you feel so inclined.
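The distinction matters because a property can exist and still be falsy; a truthiness test would treat it as absent and clobber it:

```javascript
// `in` asks "does the property exist?"; a truthy check asks "is the
// value truthy?" — not the same question for values like false, 0, ''.
var el = { hidden: false };  // property exists, value is falsy

console.log('hidden' in el);            // true  — do not extend
console.log(el.hidden ? true : false);  // false — would wrongly extend

// A guard built on `in`, as described above:
function safeExtend(obj, name, value) {
  if (!(name in obj)) obj[name] = value;
  return obj;
}
```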
Dealing with named forms means avoiding extension of the document object, and if you break that rule, never using any name a form might have (e.g. foo() is dangerous; getFoo() is less so).

Benefits of Extending The DOM

Rich API That's Getting Richer

The modern DOM is a rich API, as capable as its wrappers, if not as concise. And it's getting richer all the time. Those writing wrappers rely on plugin authors to keep up with new features being added to the DOM, or simply leave it to application developers to unwrap the DOM when they need access. Extending the DOM gives you everything the DOM has to offer today, no waiting for new releases or new plugins. Granted, it doesn't protect you from functions that don't yet enjoy cross-browser support, but that need diminishes every month as more and more users switch to evergreen browsers. Where features are missing, polyfills tend to appear in the wilds of GitHub much sooner than they do in wrappers like jQuery.

Educating Developers

When application developers use jQuery, it is rare that they learn anything but jQuery. They tend to think in jQuery and put all their plugins on the $ whether they need to be there or not. They rarely learn to appreciate the wealth of knowledge out there that is not at api.jquery.com. Even while working actively on the DOM, they know very little about it. Libraries that extend the DOM enhance it instead of hiding it away, deepening developers' familiarity with it. More developers knowing the DOM means more developers ready and able to fix bugs, innovate new features, and generally push things forward in the browser environment.

Trim The Fat

Wrapper APIs can be fairly small; Zepto sneaks under 10KB when you minify and gzip it. But part of that 10KB is devoted to just renaming the DOM interface rather than adding features. Any way you slice it, wrapping one API with a completely alternate API comes at a cost. That cost is small when the API being wrapped is flawed and inconsistent, but as the flaws and inconsistencies get ironed out, the cost becomes ever less worthwhile. The bandwidth headaches of the burgeoning mobile world only make the extra weight more of an issue.


Best. DOM Ecosystem. Ever.

Evergreen browsers are ascendant, market share is wonderfully fractured, and IE 6/7/8 is plunging to joyful depths. I can't recall the vendors ever being so motivated to concern themselves with compatibility. And where they do fail at that, the failings are often patchable with polyfills and shims or even some transpiler magic. This is the age of plentiful answers on Stack Overflow. This is the age of simple patching and forking on GitHub. Our client-side developer tools, editors and command line have never been so capable. Package managers are crowded with assets ready to be installed. It is time to stop being afraid of handling the DOM.

Always Room For Improvement

The DOM is not always pleasant to use. It is not always concise or convenient. There remains a healthy need for libraries to improve upon it. But as IE8 fades into blessed oblivion, it is time to acknowledge that much of what was wrong with extending the DOM in the past has faded also. When you get an urge to smooth those rough spots, add features, and maybe even offer some more concise interface for the DOM, don't be afraid to consider doing it directly, without a wrapper. Because a lot has changed in the last four years, and it only looks to be getting better.

Two Valid, Incompatible Approaches

The prototypes are exposed and you can extend them. This should outperform wrappers and manual extensions, but it does raise the risk of conflicts somewhat. Don't do this until you know the DOM (its present variations and future direction) well enough to choose names that avoid conflicts.
If you are more conflict-shy than performance-focused, you can step lightly and create libraries that only extend when and where they are used and no more, preferably with a small number of additional properties too. Such libraries would not be worth the penalty for DOM-intensive applications but may be able to achieve features prototype-based DOM extenders never could.
I see great value in both techniques, but for different situations. It would probably not even be difficult for some libraries to create similar, parallel versions for the different use cases. Mixing the two in a single implementation does not (for the moment) appear to be worthwhile.

The Deciding Factor

Can you afford to deny (full) support to IE8 users? Windows XP stops getting security updates in a matter of months, and its approaching demise is helping to purge IE8 from the ecosystem. But that may take a while, and meanwhile, IE8 still carries just under 10% of the market. Whether that number is high enough to warrant avoiding DOM extenders is not a question with a single answer. Google has cut off IE8 support already and their next platform won't even support IE9, while some businesses still demand IE6 support for web applications. Do you want to press the web forward or pander to stubborn legacy users? Every choice is a trade-off.
For myself, i am looking toward the future. I won't rule out supporting some number of features on older browsers, but i will not use them as a starting point or baseline. It is far more important that my applications work well in the exploding mobile device market than the shrinking IE8 market. Limited resources dictate prioritizing growth over legacy. The extra bytes that come with the one-version-fits-all approach are not appreciated by mobile users, especially international ones. So, those with antiquated browsers will get the browsehappy.com treatment until sufficient resources are available to deliver a functional version specifically for them. I may not always design UX mobile first, but it now seems obvious to approach platform decisions from that perspective.

Monday, July 29, 2013

Time To Go Native With HTML(.js)

In the last few months, i have been drifting away from jQuery, slowly, quietly. Removing dependencies on it in my "wheels", getting my brain down a level and into the native DOM. A few weeks ago, there came a watershed moment; i learned of Voyeur.js and inspiration struck. I realized that jQuery might just be in my way...

jQuery is a wrapper, a completely alternate API to the native DOM. It's a great API, no question, but it has begun to feel a bit unnecessary. The native DOM, after all, is far richer and less troublesome than it ever was back when i first fell in love with jQuery. Voyeur, by contrast, wraps nothing. It simply enhances the DOM, a little here, a little there, giving a tasteful sprinkling of ES5 sugar to a C++-style interface that happens to be exposed to a JavaScript environment. Sure, the dot-traversal of nodes by tag names using lazy getters was a delightful hack, but that wasn't what grabbed me. The inspiration was the idea that a few little additions to the DOM might be all it would take to make it feel comfortable to "go native".

But, Voyeur seemed centered on the dot-traversal of the document body. The API it gave offered no way to work on the document head, and the .create and use() features felt a bit unrefined, even awkward. Hardest of all for me to accept was that the code was not ripe for hacking, for extending, for enriching. So, i reinvented it and turned it into a JavaScript library called HTML.

I know what you're thinking. "HTML"? Seriously? The name is taken; it's un-Google-able; it lacks the flavor and pizazz of "Voyeur".  But it has two things i love; two things i cherish:
  1. It reads right in my code. The library starts on the root element <html> (where Voyeur starts on <body>). It is the root element. And it is all about HTMLElements. There is no name more readable and self-descriptive that i could give it.
  2. It constrains my code. If i named it something fancy like "Pizazz", it could become anything. I don't want that. A functional name centers and focuses development on a single purpose.
But enough about that; you get used to the name quickly. The code and what it enables are what excite me. Check out the demo and the API to get some hints at what you can accomplish. Pay close attention to what the each() function can do. It's the heart of the whole "befriend the DOM" motto for the project, as it makes it oh-so-easy to get/set/call DOM properties and functions on one or more elements. Unless you get crazy with the field aliases, the property syntax is composed of native DOM API, so their documentation is HTML's documentation.  It keeps things lightweight, consistent and future-friendly, as browser makers push new features.
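HTML's real each() is documented in its API; purely as a hypothetical sketch of the idea (not the library's source — plain objects stand in for elements here):

```javascript
// Hypothetical sketch of the each() idea: apply a native property
// name across a list of element-like objects — read it, set it, or
// call it if it happens to be a function.
function each(list, prop, value) {
  var setting = arguments.length > 2;
  return list.map(function (el) {
    if (typeof el[prop] === 'function') return el[prop](value);
    if (setting) { el[prop] = value; return el; }
    return el[prop];
  });
}
```

Because the property names are the DOM's own, the browser documentation for those properties doubles as documentation for calls like this.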

Considering the ease of working with HTML thus far, i think my days of using jQuery are numbered. Right now, HTML works in all browsers IE9 and better.  Even the quickly diminishing group of IE8 users can be supported by conditionally including things like es5-shim/es5-sham and this lovely polyfill. I should, of course, point out that the dot-traversal portion of HTML is not polyfillable at present, but everything else can be made to work.  I think for new developers who are not required to prioritize antiquated versions of IE, including jQuery's large alternative API will very soon no longer be worth the KB.  And i am hoping that this little (2KB) HTML(.js) will help make that transition easy.

Yes, my fellow developers, the native DOM can be your friend!

Tuesday, June 11, 2013


I've been unhappy with the default templates available for grunt-init.  None of them were a good fit for a "vanilla" javascript library like this.  So, of course, it was time to strike out and make a template that did work for me:


This serves my needs much better and was a good further exercise developing with and on Grunt.  I'm pleased by how much Grunt can do, but creating a template did feel more difficult than i'd expected, even with the available examples.  Much of this comes from a lack of documentation and some surprising results (like the behavior of init.writePackageJSON).

If i had all the time in the world, i'd be tempted to make a grunt-init-init template for creating templates. But that's not gonna happen.  Odds are better that i'll be forking grunt and/or grunt-init and submitting pull requests one of these days.

Just In Case

In my continuing experiments with Grunt, i decided to reinvent a wheel with Grunt from the start, instead of adapting an existing javascript library to a Grunt build.  For no particular reason, i wrote a library to identify and convert the case of strings.  Here's the result:


It handles all the usual suspects and is easily extended to handle new ones.  Not a bad day's work.
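The core idea is simple enough to sketch (this is not the library's actual API — the function names here are illustrative):

```javascript
// Hedged sketch of case identification/conversion: split any of the
// usual case styles into lowercase words, then rejoin in the target
// style.
function toWords(s) {
  return s
    .replace(/([a-z0-9])([A-Z])/g, '$1 $2')   // split camelCase boundaries
    .split(/[\s_-]+/)                          // split snake/kebab/space
    .map(function (w) { return w.toLowerCase(); });
}

function camel(s) {
  return toWords(s).map(function (w, i) {
    return i ? w.charAt(0).toUpperCase() + w.slice(1) : w;
  }).join('');
}

function snake(s) { return toWords(s).join('_'); }
```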

Tuesday, May 28, 2013

A Pox On Both Your Package Managers!

I love the package managers available for web developers.  It is such a relief from the pain of copying scripts around a few years ago.  But they also have me quite frustrated now that i am trying to publish some packages i've developed for ESHA Research.  Basically, finding available names to publish under now requires a thesaurus or bad conventions like version numbers in names or ".js" suffixes.

Bower and NPM, you should both have forced producers to namespace their packages by default from the start.

Shouldn't this have been obvious?  If you want your package manager to succeed, you need a lot of packages.  If you want a lot of packages, you need a lot of available package names.  People want to use names that are easy to remember and to discover.  There are not many of those that pertain to javascript or web development.  And you can only get so creative with synonyms and clever abbreviations before they become hard to remember and discover.  Therefore, no namespace means fewer packages or packages with worse names.

Namespaces not only create more space for names that are simple and preferred. They create space for reputation development.  People learn to recognize quality sources.  It's good for creators to have their own brand highlighted.  It's good for users to have confidence in the creators.

There are workarounds.  You can prefix all of your package names.  But this doesn't have quite the same effect.  Calling your package "company-package" is a functional solution, but it comes with a price.  It lacks the aesthetic appeal of "package".  Users will gravitate toward the one that appears simpler, cleaner, and lamest of all, quicker to type.  This is not good for the ecosystem.  It can both discourage competitors and promote inferior packages.  Only forcing namespaces upon everyone (like GitHub does) can alleviate such counter-productive biases.

And when they finally come around and add a namespace field to package.json or bower.json, it will have the same problem as the workarounds if they do not force it to be present or automatically fill it in, either from GitHub (home to vast numbers of package sources) or by "doubling" (e.g. package/package).  In fact, the latter is probably the best option.  It makes backward compatibility easy and reverses the advantage early name claimers had by dating them.
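The "doubling" rule is simple enough to sketch (a hypothetical resolver illustrating the idea, not any package manager's actual behavior):

```javascript
// Hypothetical: resolve a legacy bare name to the "doubled" namespace
// form, while already-namespaced specs pass through untouched.
function resolve(spec) {
  return spec.indexOf('/') >= 0 ? spec : spec + '/' + spec;
}

resolve('store');       // → 'store/store' (legacy name, doubled)
resolve('esha/store');  // → 'esha/store'  (already namespaced)
```

Every pre-namespace package keeps working under its doubled name, while the bare single-name space opens up for everyone.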

However it is done, this change must come.  I've been working in open ecosystems long enough to have seen mistakes like this and the inevitable correction before (Maven).  I just hope it comes sooner than later.  And in the meantime, as i push more packages from our company out there i will probably have to suffer with an "esha-" prefix for the rest of them.  I've already run through a fair bit of the thesaurus for a few of them, with little luck in finding acceptable names.


Update: Isaac Schlueter was kind enough to point me to a discussion of NPM's global namespacing on their mailing list.  It seems to me now that i have been mistaken to see NPM as a general javascript package repository.  It works as a general package manager, especially with the ability to install straight from GitHub, but as a repository they value discoverability and clarity over simplicity and competition. This limits their usefulness in client development but has, they feel, enhanced it for NodeJS users.  I still suspect this will some day change due to the friction of their intentional scarcity, but we'll see.

Friday, May 17, 2013

Trigger 1.0.0 - Application Events Go Native

Ok, i've written about the trigger library before, but a lot has changed since then. jQuery has gone modular and IE8 has drifted out of my development priorities making native wheels a reality.  In response, trigger.js has changed as well.   It's reached version 1.0.0 and staked out a place on GitHub, NPM and Bower.  The official documentation can now be enjoyed here.

Here's an overview of what's changed in the last year:

  • What's New:
    • Native support - You don't need jQuery, but if you do use jQuery everything still just works.
    • Better sequence control - e.stopSequence(), e.resumeSequence() and e.isSequenceStopped()
    • Async sequences - Use e.stopSequence(promise) and when the promise resolves, the sequence resumes.
    • Event categories - If the event type is a verb, constants (formerly "data") are the object, and tags are the adjectives, then event.category is the subject that was missing from the grammar.
    • Triggers besides click/enter - You can now declare application events to be triggered by any native (or custom) event via calls like trigger.add('dblclick');
    • Special event extensions - You can set up special handlers for particular event types.
    • Grunt build
    • QUnit test suite
  • What's Different:
    • tags use # instead of : - the colon was stolen and the tags got a more standard syntax
    • event.data is now event.constants - better describes the hard-coded nature of the property
    • trigger="foo" is now click="foo" - more triggers required a more flexible attribute declaration
  • What's Gone:
    • The $.fn.trigger wrapper. With jQuery optional and declarative triggering being the recommended pattern, the byte tax for the manual shortcut was deemed not-worth-it. It's a tiny little jquery.trigger.js extension now.
    • IE 6,7,8 support. These don't support custom application events, so you'll need both jQuery and the tiny trigger.old.js extension to make things work in older IE versions.  Oh, and IE 6 and 7 require a JSON polyfill as well.
    • e.preventDefault() to stop event sequences. The overlap of meaning was confusing, caused problems in IE9, and couldn't support promises. Use e.stopSequence()!
    • jQuery event data == e.data.  e.data became e.constants and application event listeners no longer receive data as additional parameters. Again, the overlap of meaning was a problem and the switch to native events offered no way to pass listeners extra parameters anyway. Support for this is under consideration for jquery.trigger.js, but it would mean triggered event sequences could not be heard outside jQuery's event system.
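The stop/resume-on-promise behavior described above can be illustrated with a toy sequence runner (this mirrors the method names but is a guess at the mechanics, not trigger.js source):

```javascript
// Toy illustration: run steps in order, let any step pause the
// sequence, and optionally resume when a supplied promise resolves.
function runSequence(steps) {
  var i = 0, stopped = false;
  var e = {
    stopSequence: function (promise) {
      stopped = true;
      if (promise) promise.then(e.resumeSequence);  // async resume
    },
    resumeSequence: function () {
      stopped = false;
      next();
    },
    isSequenceStopped: function () { return stopped; }
  };
  function next() {
    while (i < steps.length && !stopped) {
      steps[i++](e);  // each step receives the event-like object
    }
  }
  next();
  return e;
}
```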

Tuesday, April 2, 2013

A Better Storage API

Modern browsers have these very handy shortcuts to persistence called localStorage and sessionStorage.  While they're not recommended for everything due to some feature limitations, they are extremely useful for a modern webapp developer.  They are far better than cookies.

There's just one problem: the API is still really lame. Yes, it's a big jump up from cookies, but that's not saying much.  It's verbose, only works with strings and is decidedly lacking in features and convenience.  This is exactly the kind of wheel that i cannot resist reinventing.

I actually wrote store.js well over two years ago, in its first incarnation.  It has come a long way since then and has been well used here at ESHA Research.  However, it has never yet seen the "light of day" out there in open source.  Now, you can fork it on GitHub!

store.js handles JSON transparently, provides a very rich API, and is even extensible (with a few nice extensions and a few crazy ones already available in the repo). It supports both concise and explicit function calls to suit your needs (e.g. store('foo') === store.get('foo')).  It even supports namespacing your data with ease.
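The real API lives in the README; the JSON transparency and dual call styles can be sketched like so (a toy backed by a plain object so it runs anywhere — in a browser you would back it with localStorage instead):

```javascript
// Hedged sketch, not store.js source: a storage wrapper that
// serializes and parses JSON transparently, with both a concise
// call form and explicit get/set functions.
function makeStore(backing) {
  backing = backing || {};
  function store(key, value) {             // concise form
    return arguments.length > 1 ? store.set(key, value) : store.get(key);
  }
  store.set = function (key, value) {
    backing[key] = JSON.stringify(value);  // strings in, any JSON value out
    return value;
  };
  store.get = function (key) {
    return key in backing ? JSON.parse(backing[key]) : undefined;
  };
  return store;
}
```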

Check out the README on the github repo for documentation.  I think you'll like it. :)