Links I’ve encountered #5

It has been a while since I last posted on the site. I have been busy with moving plans and juggling several gigs at a time.

From Callback to Future -> Functor -> Monad – Documents the movement from callback-based JS towards a more Future-based setup employing functional programming techniques.

JavaScript Journey – A very interesting repo that takes on solving one JS problem in as many different ways as possible, from simple procedural for...in loops all the way to reactive programming and even asm.js. There is a full blog post about it as well.

Why mixins are harmful – I have only ever employed mixins when composing objects for passing state around, and I would not advise using them in an OOP style, especially in prototypal JS. Raganwald goes further in exploring how you can avoid certain caveats when using mixins, especially with ES6 modules.


Everything is an Object…or not!

If there is one thing that grinds my gears, it is running into people who mistake prototypal inheritance for classical inheritance. Even worse is when they argue for classical inheritance patterns without any willingness to admit they lack some prototypal knowledge. It does not help that we now have the class keyword in JS. This is not about which is superior or inferior, but rather about distinguishing the two. I will also try to track down the earliest ancestor of all JS objects.

TL;DR:

In case you do not want to read the entire post, then I will summarize it as follows: Classical inheritance is like building houses using a blueprint, while prototypal inheritance is building an apartment building, focusing on what should be shared communally. They both have their caveats, like how do you improve upon the house/flat in each instance? Do you first edit the blueprint in classical inheritance? Do you edit the new flat on the fly in prototypal inheritance or do you just add it to the apartment building for all to share? There are no obvious answers to these questions but understanding this difference could help you structure things much more efficiently.

class vs Object.create vs new:

For the uninitiated, each of the keywords above achieves the same basic goal: creating objects with shared behaviour. class is a nicer way of writing constructor functions (i.e. new) in ES2015. Object.create is an easier (cleaner?) way of creating objects, introduced in ES5. For clarity's sake, here is some code written using a constructor function and new, with class and Object.create equivalents sketched right after:


//Constructor function using 'new'
var Animal = function(name, sound){
    this.name = name;
    this.sound = sound;
};

//Shared behaviour lives on the prototype - one function, borrowed by every instance
Animal.prototype.makeSound = function(){
    console.log(this.sound);
};

var cat = new Animal("Tiger", "Meow");

cat.makeSound(); //"Meow"
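
Since class and Object.create were brought up, here is a quick sketch of the same Animal written both ways (my own illustration, with hypothetical names, not from the original example):

//ES2015 class - sugar over the constructor function above
class AnimalClass {
    constructor(name, sound){
        this.name = name;
        this.sound = sound;
    }
    makeSound(){
        console.log(this.sound); //also lands on AnimalClass.prototype
    }
}

//Object.create - delegate straight to a plain prototype object
var animalProto = {
    makeSound: function(){
        console.log(this.sound);
    }
};

var cat2 = Object.create(animalProto);
cat2.name = "Tiger";
cat2.sound = "Meow";
cat2.makeSound(); //"Meow"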

These are trivial examples, but I wanted to highlight the usage of prototype (it's in the name!). The makeSound function is assigned to a shared prototype object, not to any single instance. If a behaviour is not unique to one object, we can delegate it to that single shared object, and whenever an instance needs it, it can quickly borrow the method/property. How does an object borrow the properties? Through a hidden link to its prototype, exposed as __proto__ and sometimes referred to as the [[Prototype]] property.


//From the constructor function example

cat.__proto__ === Animal.prototype //true

//THIS IS NOT RECOMMENDED. NEVER REFER TO AN OBJECT's __proto__ PROPERTY DIRECTLY!!!
//IT IS HIDDEN FOR A REASON - use Object.getPrototypeOf(cat) instead

Ancestry.com for Objects:

This is where it gets interesting. If this prototype behaviour is common to every single object, there must be some object to which everything points, right? Existentialism aside, this is absolutely correct. All* objects keep inheriting properties through the hidden __proto__ all the way up. If the cat object does not have a makeSound property, the lookup checks its __proto__ object and uses that method. If it is not there, it checks a level deeper (i.e. the __proto__ of its own __proto__). You can make this trip yourself by following the __proto__s until you hit null (as sketched below), but it will not be obvious where exactly that final object resides. Instead, let's use deductive and inductive reasoning.
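
A quick sketch of that trip, using the cat from earlier (via the discouraged __proto__, purely for illustration):

cat.__proto__                       //Animal.prototype
cat.__proto__.__proto__             //Object.prototype
cat.__proto__.__proto__.__proto__   //null - the end of the chain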

I will go with the assumption that everything in JS is an object. With the exception of primitives, we can check the prototype chain using the instanceof operator. Strictly speaking, instanceof checks whether a constructor function's prototype property appears anywhere in the object's prototype chain. In essence, if all constructor functions are objects (see earlier assumption), they must all inherit from the Function object. To check for this:


    Animal instanceof Function //true

OK, this is getting somewhere. From here onwards it will get confusing, but please bear with me. The Function object is actually just a normal JS function that all functions inherit from. So it must also have a prototype property, from which all functions inherit. It will also have the usual hidden __proto__ property, because everything is an object (if the assumption still holds). Because functions are also objects, they can be part of a __proto__ chain themselves. As strange as it sounds, a function could play both the part of a prototype and that of a hidden __proto__. That is exactly what is happening with Function.


Function.__proto__ === Function.prototype //true

From here onwards, we check out of functions and move on to objects (we've closed the loop, so to speak). Function.prototype, the key to all functions, has no prototype property of its own (it is undefined, but that is not strange – you can do that to any function). It only has a __proto__ property, meaning we still have a chain to follow, and it points to Object.prototype. Object is the natively available function that constructs empty objects, and seeing that it is a constructor function, its prototype property acts as part of the __proto__ chain for all of its instances. It turns out that Object.prototype is just a plain object and, surprisingly, its own __proto__ is null. That is a first for an object in JavaScript. Maybe, just maybe, we have found the single source of truth for all JS objects. There are a few checks to verify this.


Object.prototype instanceof Object //false. The only object for which this check fails
Object.prototype.isPrototypeOf(Function) //true
Object.prototype.isPrototypeOf(Object) //true
Object.prototype.isPrototypeOf({})  //true. Looks like object literals inherit from it too!
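
The intermediate claims above about Function.prototype and Object.prototype can be checked the same way (a quick console sketch of my own):

Function.prototype.prototype === undefined          //true: it hands out no prototype
Function.prototype.__proto__ === Object.prototype   //true: functions chain into objects
Object.getPrototypeOf(Object.prototype) === null    //true: the chain ends here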

So, at this point I think it is fair to say that all JS objects*, through the __proto__ chain, point to Object.prototype. Feel free to disprove this! Bonus image below for visual learners.


[Image: prototype chain diagram] © Professional JavaScript for Web Developers, 3rd Edition, Nicholas C. Zakas, ISBN: 978-1-118-22219-5

* : Someone rightfully pointed out that an object created with Object.create(null) has no __proto__, essentially removing it from the prototype chain. It is the equivalent of creating an object and setting its __proto__ to null.
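
A quick demonstration (my own sketch):

var bare = Object.create(null);
Object.getPrototypeOf(bare) === null  //true: no chain at all
bare instanceof Object                //false: it lives outside the Object.prototype family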

Links I’ve encountered #4

Merry Christmas!! After a family-filled day, I decided to sneak in a couple of interesting articles I've read in the past week or so.

How the Grinch stole array.length access – It hits a very sensitive nerve. Caching array.length is one of those micro-optimizations that gets spread around like an urban legend, in that it sounds reasonable enough, but very few people actually go out and prove whether it works. A good thing to note is that the research was done purely on the V8 engine.

Big Data, Machine Learning, and the Social Sciences: Fairness, Accountability, and Transparency – A while back I watched Hanna Wallach's talk on text analysis, and she got me intrigued by the applications of ML techniques in the real world. This article goes into great (and better) detail in explaining what I was trying to convey a while back.

One Size Fits Some: An Examination of Responsive Image Solutions

This is an article I wrote for Toptal's blog. After getting permission from them, I decided to post it here too. It is a survey of the latest responsive image solutions, coupled with ideas that might help in deciding on the correct solution to use.

As mobile and tablet devices come closer to achieving final world domination, web technology is in a race to accommodate the ever-growing number of screen sizes. However, devising tools to meet the challenges of this phenomenon brings a whole new set of problems, with one of the latest buzzwords being "responsive web": the challenge of making the web work on most, if not all, devices without degrading the user's experience. Instead of designing content to fit desktops or laptops, information has to be available for mobile phones, tablets, or any surface connected to the web. However, this evolution has proven to be a difficult and sometimes painful one.

While it can be almost trivial to accommodate textual information, the tricky part comes when we take into consideration content like images, infographics, videos, and so forth, which was once designed with only desktops in mind. This brings up not only the question of displaying the content correctly, but also how the content itself is consumed on different devices. Users on smartphones are different from users on desktops; things like data plans and processing speed have to be considered as well. Google has started to highlight mobile-friendly sites in its search results, with some speculating that this will lead to a substantial PageRank boost for such sites. Earlier solutions addressed this by deploying mobile-only subdomains (and redirects), but this increased complexity and fell out of fashion quickly. (Not every site can afford this route.)

On the Quest for Responsive Images

At this point, developers and designers have to ensure their website's load is optimized for users on mobile devices. Over 20% of web traffic now comes from mobile users, and the number is still rising. With images taking one of the largest shares of page data, it becomes a priority to reduce this load. Several attempts have been made, ranging from back-end to front-end solutions. To discuss these solutions, we first need to understand the shortcomings of current image linking.

The <img> tag has only the src attribute, linking directly to the image itself. It has no way of determining the correct image to serve without any add-ons.

Can't we just include all the image sizes in the HTML and use CSS rules to display:none all but the correct image? That would be the most logical solution in a perfectly logical world: the browser could ignore all the images not displayed and, in theory, not download them. However, browsers are optimized beyond common logic. To render the page fast enough, the browser pre-fetches linked content before the necessary stylesheets and JavaScript files are even fully loaded. Instead of ignoring the large images intended for desktops, we end up downloading all of the images, resulting in an even larger page load. The CSS-only technique works only for images intended as background images, because those can be set within the stylesheet (using media queries).
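
For reference, a minimal sketch of that CSS-only technique for background images (the class name and file names are made up):

/* Only the image in the matching rule is ever requested */
.hero {
    background-image: url("kitty_small.jpg");
}

@media (min-width: 768px) {
    .hero {
        background-image: url("kitty_large.jpg");
    }
}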

So, what’s a website to do?

Back-End Solutions

Barring mobile-only sites/sub-domains, we are left with sniffing the user-agent (UA) string and using it to serve the correct images back to the user. However, any developer can attest to how unreliable this solution can be. New UA strings keep popping up all the time, making a comprehensive list strenuous to maintain and update. And of course, this does not even take into account that UA strings are easily spoofed in the first place.

Adaptive Images

However, some server-side solutions are worthy of consideration. Adaptive Images is a great solution for those preferring a back-end fix. It does not require any special markup; instead, it uses a small JavaScript file and does most of the heavy lifting in its back-end file. It runs on a PHP and nginx configured server. It also does not rely on any UA sniffing, but instead checks the screen width. Adaptive Images works great for scaling images down, but it is less suited for art direction, i.e. adapting images with techniques such as cropping and rotation rather than merely scaling them.

Sencha Touch

Sencha Touch is another back-end solution, although it is better described as a third-party solution: the Sencha.io Src service will resize the image accordingly by sniffing the UA. Below is a basic example of how the service works:

<img src="http://src.sencha.io/http://example.com/images/kitty_cat.jpg" alt="My Kitty Cat">

There is also an option to specify the image sizes, in case we do not want Sencha to resize the image automatically.
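
If I remember the URL scheme correctly (treat the exact format as an assumption), a width is requested by prefixing it to the proxied URL:

<img src="http://src.sencha.io/320/http://example.com/images/kitty_cat.jpg" alt="My Kitty Cat">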

At the end of the day, server-side (and third-party) solutions require resources to process the request before sending the correct image back. This takes precious time and in turn slows down the request-response trip. A better solution would be for the device itself to determine which resources to request directly, with the server responding accordingly.

Front-End Solutions

In recent times, there have been great improvements on the browser side to address responsive images.

The <picture> element has been introduced and approved in the HTML5 specification by the W3C. It is not yet available in all browsers, but it will not be long before it is supported natively. Until then, we rely on JavaScript polyfills for the element. Polyfills are also a great solution for legacy browsers lacking the element.

There is also the srcset attribute, available for the <img> element in several WebKit-based browsers. It can be used without any JavaScript and is a great solution if non-WebKit browsers are to be ignored. It is a useful stop-gap for the odd case where other solutions fall short, but it should not be considered a comprehensive solution.
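
A minimal sketch of that srcset usage (file names are made up; as far as I recall, the density descriptors below are what the early WebKit implementation understood):

<img src="/images/kitty_small.jpg"
     srcset="/images/kitty_medium.jpg 1.5x, /images/kitty_large.jpg 2x"
     alt="My Kitty Cat">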

Picturefill

Picturefill is a polyfill library for the <picture> element. It is one of the most popular front-end solutions to responsive images. There are two versions: Picturefill v1 mimics the <picture> element using span elements, while Picturefill v2 uses the <picture> element in browsers that already offer it and provides a polyfill for legacy ones (for example, IE9 and above). It has some limitations and workarounds, most notably for Android 2.3 – which, incidentally, is an example of where the img srcset attribute comes to the rescue. Below is sample markup for using Picturefill v2:

<picture>
  <source srcset="/images/kitty_large.jpg" media="(min-width: 768px)">
  <source srcset="/images/kitty_medium.jpg" media="(max-width: 767px)">
  <img srcset="/images/kitty_small.jpg" alt="My Kitty Cat">
</picture>

To improve performance for users with limited data plans, Picturefill can be combined with lazy loading. However, the library could offer wider browser support and address the odd cases directly rather than relying on patches.

Imager.js

Imager.js is a library created by the BBC News team to accomplish responsive images with a different approach from the one used by Picturefill. While Picturefill attempts to bring the <picture> element to unsupported browsers, Imager.js focuses on downloading only the appropriate images while also keeping an eye on network speeds. It also incorporates lazy loading without relying on third-party libraries. It works by using placeholder elements and replacing them with <img> elements. The sample code below exhibits this behavior:

<div>
    <div class="image-load" data-src="http://example.com/images/kitty_{width}.jpg" data-alt="My Kitty Cat"></div>
</div>

<script>
    new Imager({ availableWidths: [480, 768, 1200] });
</script>

The rendered HTML will look like this:

<div>
    <img src="http://example.com/images/kitty_480.jpg" data-src="http://example.com/images/kitty_{width}.jpg" alt="My Kitty Cat" class="image-replace">
</div>
<script>
    new Imager({ availableWidths: [480, 768, 1200] });
</script>

Browser support is much better than Picturefill's, at the expense of being a more pragmatic solution than a forward-thinking one.

Source Shuffling

Source Shuffling approaches the problem from a slightly different angle than the rest of the front-end libraries. It resembles something out of the "mobile first" school of thought: it serves the smallest resolution possible by default and, upon detecting that a device has a larger screen, swaps the image source for a larger image. It feels more like a hack and less like a full-fledged library. This is a great solution for chiefly mobile sites, but it means double resource downloading is unavoidable for desktops and/or tablets. The general idea is sketched below.
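
Something along these lines (my own sketch of the general idea, not Source Shuffling's actual API; the data attribute is made up):

//Serve the small image by default; swap in the large one on big screens
function shuffleSources(){
    if(window.matchMedia("(min-width: 768px)").matches){
        var images = document.querySelectorAll("img[data-src-large]");
        for(var i = 0; i < images.length; i++){
            images[i].src = images[i].getAttribute("data-src-large");
        }
    }
}
window.addEventListener("load", shuffleSources);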

Some other notable JavaScript libraries are:

  • HiSRC – A jQuery plugin for responsive images. Dependency on jQuery might be an issue.
  • Mobify.js – A more general library for responsive content, including images, stylesheets and even JavaScript. It ‘captures’ the DOM before resource loading. Mobify is a powerful comprehensive library, but may be overkill if the goal is just responsive images.

Summary

At the end of the day, it is up to the developer to decide which approach suits the user base. This means data collection and testing will give a better idea of the approach to take.

To wrap up, the list of questions below can be helpful to consider before deciding the appropriate approach.

  • Are legacy browsers an issue? If not, consider using a more modern approach (e.g. Picturefill, the srcset attribute)
  • Is the response time critical? If not, go for a third-party or back-end solution.
  • Are the solutions supposed to be in-house? Third-party solutions will obviously be ruled out.
  • Are there lots of images already on a site that is trying to transition to responsive images? Are there concerns about validation or semantic tags (or rather non-semantic tags)? This will require a back-end solution to route the image requests to something like Adaptive Images.
  • Is art direction a problem (specifically for large images with a lot of information)? A library like Picturefill will be a better solution for such a scenario. Also, any of the back-end solutions will work as well.
  • Is there a concern about lack of JavaScript? Any of the front-end solutions will be out of the question, which leaves the back-end or third-party options that rely on UA sniffing.
  • Is there a priority for mobile response times over desktop response times? A library like Source Shuffling may be more appropriate.
  • Is there a need to provide responsive behavior to every aspect of the site, not just images? Mobify.js might work better.
  • Has the perfect world been achieved? Use CSS-only display:none approach!

Before the Drama

Best post about the recent io.js fork of Node. I'm surprised at how low-key the fork has been, with everyone being pretty understanding.

hueniverse

I am going to comment on the recent node fork. Soon. I am not happy about it. I also don’t think it’s bad. I’ve been involved in the conversations with most sides since May and am in a unique position being (probably) the only “guy in the middle” that I think I can provide a perspective that is more complete than most. However, before I do that I would like to defuse the drama.

Given my position at Walmart and the fact that I knew a fork is highly likely for half a year, you can imagine I had a few internal conversations about node and its future inside and outside of Walmart. A large(st) enterprise has to ensure its investments are durable and sound. I shared the situation with my senior management and the message I delivered to them is the same one I am going to deliver…


Links I’ve encountered #3

From Markov Chains to Technocracy.

Markov Chains explained – While it is easy to explain how Markov Chains work, it is often hard to grasp the intuition behind why they work the way they do. A couple of nice visualizations don't hurt either.

How the other half works – A polarized look at PM vs. developer perceptions. Hard to disagree with him; however, anecdotal experiences are not good judgements of an entire industry.

Why Blurring Text is a Bad Idea – I will stick to the tried and proven method of MS-Paint black boxes over images. No layers, no leaks.

NodeJS: File Streams, Reading, and Piping

Lately I've been working with Node.js to try to understand the back-end environment through JavaScript. I think it is fair to say that Node.js offers a smooth transition from front-end JavaScript to the back-end. Concepts like event loops, closures and variable scoping carry over and provide a familiar mindset for working with a server. While I have worked with Python and PHP back-ends, using Node.js was the first time I felt like I had a clue about what was going on. To prove how smooth the transition was, I managed to debug an error in less than 30 seconds without any prior experience working with file reading, streaming or piping.

Streams:

Node.js, like any back-end language, offers a simple module for reading and writing files. And like most Node data modules, it uses the stream abstraction to communicate between services. Streams are one of those core Node ideas found in most data services. There are two types of streams: readable and writable. Each has its own events and behaviours, and each can be controlled with event listeners to fit your own needs. You can listen for events like connections, incoming data, the end of a stream, and various others (depending on the module).

Stream Problems:

However, data transfer usually comes with some problems. Say you have a client to whom the server is sending data. If you happen to be reading that data from an outside source, you are doing two things: reading the data and sending it to the client. At certain points, you will find the client consuming the data more slowly than you are sending it. If this behaviour goes unchecked, you end up with a slow client problem, with unsent data piling up in memory. Fortunately, Node allows you to pause the readable stream and listen for the drain event from the client's writing process:

  var fs = require("fs"),
      http = require("http");
  var readFile = fs.createReadStream("my_file.txt");

  http.createServer(function(request, response){
    
    readFile.on("data", function(data){
      if(!response.write(data)){ 
        readFile.pause(); //if we aren't writing to the client, dont read anything!
      }
    });
    
    response.on("drain", function(){
      readFile.resume();
    })
    
    readFile.on("end", function(){
      response.end();
    })
  }).listen(1337);

Using stream.pipe():

The pause-and-resume pattern shows up throughout Node wherever data transfer is concerned, and it has a solution that is simple and cleanly abstracted for most streams: stream.pipe() magically takes care of it. In short, you pipe a readable stream into a writeable stream (i.e. readStream.pipe(writeStream)). The previous example can be shortened to:

  var fs = require("fs"),
      http = require("http");
  var readFile = fs.createReadStream("my_file.txt");
  
  http.createServer(function(request, response){
    readFile.pipe(response);
  }).listen(1337)

That is it; except for a simple error that I mentioned in the first paragraph. Can you spot it?

Hint #1: Try refreshing the page.

Hint #2: Your node service needs to create a new read stream for each request.
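
Spoiler, in case you gave up – the read stream is created once, outside the request handler, so the first request consumes it and subsequent requests hang. A minimal sketch of the fix:

var fs = require("fs"),
    http = require("http");

http.createServer(function(request, response){
    //a stream can only be consumed once, so make a fresh one per request
    fs.createReadStream("my_file.txt").pipe(response);
}).listen(1337);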

Why Standard Economic Models Don’t Work–Our Economy is a Network

Starts with an interesting premise, but I'm not too sure about the conclusions drawn.

Our Finite World

The story of energy and the economy seems to be an obvious common sense one: some sources of energy are becoming scarce or overly polluting, so we need to develop new ones. The new ones may be more expensive, but the world will adapt. Prices will rise and people will learn to do more with less. Everything will work out in the end. It is only a matter of time and a little faith. In fact, the Financial Times published an article recently called “Looking Past the Death of Peak Oil” that pretty much followed this line of reasoning.

Energy Common Sense Doesn’t Work Because the World is Finite 

The main reason such common sense doesn’t work is because in a finite world, every action we take has many direct and indirect effects. This chain of effects produces connectedness that makes the economy operate as a network. This network behaves…


Links I’ve encountered #2

This time, it’s impressive web demos using complex animation algorithms and basic browser tools. Oh, and an ML post.

Realistic Terrain in 130 JS lines – Pretty impressive demo. I'm still not sure what goes on in the code because of all the WebGL & physics involved. However, it shows how far one can go with simple browser tools.

Creating 3D worlds with HTML & CSS – It seems like the new fad is doing nearly everything in basic CSS(3) that was done with JS before. This demo reminds me of countless hours wasted on CS:CZ.

Neural Networks, Manifolds & Topology – Good article with impressive visualizations showing what goes on underneath an NN's hidden layers. It's not uncommon to hear hidden layers referred to as black boxes; however, the article sheds some light under the hood.