Maintainable CSS architecture (how I stopped worrying and learned to love the BEM)

CSS is hard! It is hard because CSS is easy. It reads like a paradox, but bear with me. CSS is relatively easy to learn, at least compared to learning programming languages. The majority of styling rules are clear and easy to grasp. In my experience, taking someone from zero to reasonably-understands-CSS-and-can-style-a-web-page takes roughly a couple of weeks. I ran a class at work and nearly everyone could style elements after verbal instructions alone. This is rare in regular programming, where explaining what an object instance is takes at least a week.

CSS is hard when you start organising layouts. Knowing which rules to apply, and where, is almost an art in itself. At a certain point, I cannot even explain to someone why I chose paddings over margins. But it is these subtle decisions that play the bigger part in determining whether your CSS architecture succeeds or fails. Success, in this case, means that deleting one preprocessor file won't affect an unrelated section of your project. Every section should work independently of the others.

I wrote this post because I was surprised at how little time new CSS takes to write at my job, despite the fact that the architecture was set up nearly three years ago. The only style guide is a pre-commit hook that warns about nesting. It rarely flags anything except something like

@media (min-width: 920px) {
  .block__element--modifier::after {}
}

This is an edge case that does not break any of the rules outlined below, but it is a reminder that the pre-commit hook still works.

Please note that this post assumes you are familiar enough with CSS properties, preprocessors, and specificity. In other words, if you have an intermediate understanding of CSS but struggle to write maintainable CSS, then maybe it will help. If you find yourself constantly writing selectors like .modal > div.form-wrapper, then by all means continue reading. If you keep overriding Bootstrap styles, maybe this is also the post for you (I know because I used to have lots of nested .btn in my CSS).

There are several rules that are easy to follow and yet magically keep you out of trouble:

Always use classes

This one is rather straightforward. Try as much as possible to avoid referring to actual tag names or #ids. Every time you break this rule, drop a coin in a jar. If you find yourself actually looking for a coin jar, you have already failed. The only time it is acceptable to break the rule is when a third-party element does not follow it. Even then, send a pull request to the project explaining why they should learn to write maintainable CSS.
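As an illustration (the selectors are hypothetical, not from any real project), compare a structure-dependent selector with a single-class one:

```css
/* Fragile: depends on tag names and markup structure */
.modal > div.form-wrapper {
  padding: 10px;
}

/* Robust: a single class survives markup changes */
.modal__form-wrapper {
  padding: 10px;
}
```

The first selector breaks the moment the div becomes a form or moves one level deeper; the second keeps working regardless of the markup around it.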

Learn, understand and embrace BEM

BEM stands for Block-Element-Modifier. It sounds scary at first but it is a relatively simple idea. When writing CSS, try to break portions of your page down into independent components/sections (I will use block and component interchangeably). Let's take a landing page. You have a navigation bar, a sidebar, a call to action, an annoying popup modal, and a contact section/footer. And just like that, you now understand what Block stands for in the acronym. Think of a block as a part of your site that will always look and behave the same way no matter where it is found. In some circles this is called a component. Do not think of this in terms of functionality. Two chatboxes are not the same block if one is small and stickied to the bottom of each page while the other covers an entire page and functions as its own page. As far as functionality is concerned they might do the exact same thing, but how they look and behave relative to the rest of the page is what differentiates them. Each block should be given its own class: the stickied chatbox should be .small-chatbox, compared to .full-page-chatbox in the other case.

Each block is composed of child elements that together form the entire block. A barebones chatbox comprises chat messages, a text input box, and a submit button. Each one of these is an element. Try to break each block down into its most granular elements. If you have lots of chat messages, don't think of the list as one element but rather as many elements. It is perfectly acceptable to wrap elements inside other elements. Just keep in mind that any CSS you write for an element should ONLY address that element's styling. As a naming convention, elements of a block are given a .block__element class. Below is how you would write our chat messages.

<div class="chatbox">
  <div class="chatbox__messages-wrapper">
    <p class="chatbox__message"></p>
    <p class="chatbox__message"></p>
  </div>
  <!-- TODO: form for the chatbox -->
</div>

Modifiers are probably the most subjective part of BEM. A modifier is a CSS class given to an existing block or element class to slightly alter its appearance or behaviour. A modifier isn't meant to exist on its own but to enhance another block/element. In our chatbox example, let's assume a user can favourite a message. If a starred message looks exactly like any other message with the exception of a golden border and slightly larger text, then by all means just add a modifier class to it. In the same scenario, if a chatbox blinks after receiving a new message, then add a modifier class to the block. Modifier classes are written as .block__element--modifier when applied to an element and .block--modifier when applied to a block. Keep in mind that a modifier class does not exist on its own. If a block or element has a modifier class, it should also have the block or element class (>= 2 CSS classes). If you find yourself writing class="chatbox__message--starred", then you are not following BEM. It should be class="chatbox__message chatbox__message--starred". It looks odd at first but you will understand later why this is very powerful. Like the name suggests, modifiers just modify an existing block/element. If you are unsure whether to add a modifier class or create a new element, play it safe and create a new element. You can never go wrong with creating a new element. My rule of thumb is that a modifier should not have more than 3 lines of CSS.

The hardest part of BEM is breaking down the actual layout into independent blocks. It comes with practice and, if lucky, a good UI layout.

Not everything is a block (a word about objects)

One of the pitfalls of learning BEM is trying to treat everything in your CSS as a block. Let's talk about layouts. Most real-world sites have gutters and grids fencing components off from one another. Most CSS helper libraries have grid classes. Let's call these layout helper classes. They exist to organise other blocks. If your page consisted of only these layout helper classes, you would see a blank page. Using a theatre analogy, they are the stage directors while blocks are the main show. You don't need to follow BEM to write them. If you do mistake them for blocks, nothing will go wrong. But it helps to know that they exist, and it helps to avoid going full BEM.


Modules are large blocks that exist only once on your site. Regular blocks can appear on several pages while a module appears on one page only. Let's say your shopping site has a section where users can edit their account details. That is an account-editing module. However, an account summary block can appear on both the edit account page and the checkout page. As a result, most CSS files will have a few modules but a lot of components. Modules still follow BEM's class-naming guide, but it is useful to separate them from regular components to ensure they take priority.


Like any large CSS project using a preprocessor, you will end up needing several helper files for things like variables, breakpoints, mixins, third-party libraries, prefixes (for the unfortunate), helper functions, resets, etc. Often, you drop these into your project before writing a single line of CSS.

Bringing it all to life (using Sass)

I will use SCSS, but you can replace it with Stylus, Less, etc. and nothing should change. When using a preprocessor, you should always have a main file that acts as your entry point. I aptly named mine main.scss. Below is a guideline on how to import the rest of the files inside this main file. Note that CSS specificity requires us to import the least specific files first, with the highest-priority styles, the module files, coming last.

// main.scss
@import 'miscellaneous/main';    // Importing all overhead
@import 'helpers/main';          // The layout files folder
@import 'components/main';       // All blocks
@import 'modules/main';          // Modules

Inside each of these folders, we import an entry file that links to all the files in that folder. For example, components/main.scss will contain

@import 'navigation-bar';        // contains styles for the navigation component only
@import 'chatbox';               // chatbox block styles

Assuming the same markup from the previous chatbox example, the chatbox SCSS file (i.e. components/chatbox.scss) will look like

.chatbox {
  box-shadow: 2px 2px $fade-shadow; // colour variable for my shadows

  &--alert {
    border: 1px solid $annoying-blink-color;
  }

  &__messages-wrapper {
    padding: 5px;
  }

  &__message {
    font-size: 16px;

    &--starred {
      text-decoration: underline;
      font-size: 18px;
      background-color: $golden-background;
    }
  }
}
This will output:

.chatbox {
  box-shadow: 2px 2px #000;
}
.chatbox--alert {
  border: 1px solid red;
}
.chatbox__messages-wrapper {
  padding: 5px;
}
.chatbox__message {
  font-size: 16px;
}
.chatbox__message--starred {
  text-decoration: underline;
  font-size: 18px;
  background-color: yellow;
}

The first thing you should notice here: block modifiers always come before element styles. This is a good way to ensure an element's properties are not overwritten by the block's styling. CSS specificity should automatically protect us from such a scenario, but you can never be too careful.

The other, and possibly the most important, thing about BEM: there is NO NESTING. All my CSS consists of single classes only. My markup can contain up to two BEM classes, but specificity is driven by the last class to be declared. All my modifiers take priority over blocks and elements. This is the desired behaviour; modifiers are meant to overrule the original styles. The lack of nesting guarantees my CSS won't be affected if I decide to change one of the classes. It also gives that desirable specificity graph. I mentioned above how my company's pre-commit hook will occasionally throw a false flag for @media (min-width: 920px) { .block__element--modifier::after {} }. This is still valid because we are only styling one class without referencing other selectors.

On the subject of changes breaking unrelated sections of your site: deleting one component only requires you to delete the import line and the component's style file. It won't break any other section. Deleting a module works the same way. As long as your specificity relies on file position, everything will magically keep working.

And that is pretty much it. Follow these rules and your CSS will be easier to write than ever. In the future, I will explore why modules are important and why we import them last. That will require more real-world examples with various permutations where regular components don't suffice on their own.

Further readings to dive deep into why this style works so well:
Specificity Graph
Nesting your BEM


Links I’ve encountered #5

It has been a while since I last posted on the site. I have been busy with moving plans and holding onto several gigs at a time.

From Callback to Future -> Functor -> Monad – Documenting the movement from callback-based JS towards a more Future-based setup employing functional programming techniques.

JavaScript Journey – A very interesting repo that takes on solving one JS problem in as many different ways as possible, from simple procedural loops all the way to reactive programming and even asm.js. There is a full blog post about it as well.

Why mixins are harmful – I have only ever employed mixins when composing objects to pass state around, and I would not advise using them in an OOP style, especially in prototypal JS. Raganwald goes further in exploring how you can avoid certain caveats when using mixins, especially with ES6 modules.

Everything is an Object…or not!

If there is one thing that grinds my gears, it is running into people who mistake prototypal inheritance for classical inheritance. Even worse is when they argue for classical inheritance patterns without any willingness to admit they lack some prototypal knowledge. It does not help that we now have the class keyword in JS. This is not about which is superior or inferior, but rather a distinction between the two. I will also try to track down the earliest ancestor of all JS objects.


In case you do not want to read the entire post, I will summarize it as follows: classical inheritance is like building houses from a blueprint, while prototypal inheritance is like building an apartment building, focusing on what should be shared communally. Both have their caveats, like how you improve upon the house/flat in each instance. Do you first edit the blueprint in classical inheritance? Do you edit the new flat on the fly in prototypal inheritance, or do you add the improvement to the apartment building for all to share? There are no obvious answers to these questions, but understanding the difference could help you structure things much more efficiently.

class vs Object.create vs new:

For the uninitiated, each of the keywords above achieves the same end: creating objects that share behaviour. class is a nicer way of writing constructor functions (i.e. new) introduced in ES2015. Object.create is an easier (cleaner?) way of creating objects introduced in ES5. For clarity's sake, here is some code written using a constructor function and new:

//Constructor function using 'new'
var Animal = function(name, sound){
    this.name = name;
    this.sound = sound;
};

Animal.prototype.makeSound = function(){
    return this.sound;
};

var cat = new Animal("Tiger", "Meow");

cat.makeSound(); //"Meow"

This is a trivial example, but I wanted to highlight the usage of prototype (it's in the name!). The makeSound function is assigned to Animal.prototype, not to each object. If we have behaviour that is not unique to one object, we can delegate it to a single shared object, and whenever we need it, an object can quickly borrow the method/property. How does an object borrow properties? By keeping a link to its constructor's prototype through a hidden __proto__ property, sometimes referred to as [[Prototype]].

//From the constructor function example

cat.__proto__ === Animal.prototype //true
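Since the heading promises all three keywords, here is a rough sketch (the names AnimalClass and animalProto are my own, not from the original example) of the same Animal written with class and Object.create. The shape differs, but the prototype link is the same idea:

```javascript
// ES2015 class: syntactic sugar over the constructor function
class AnimalClass {
  constructor(name, sound){
    this.name = name;
    this.sound = sound;
  }
  makeSound(){
    return this.sound;
  }
}

// Object.create: build the shared prototype first, then link objects to it
var animalProto = {
  makeSound: function(){
    return this.sound;
  }
};

var classCat = new AnimalClass("Tiger", "Meow");
var protoCat = Object.create(animalProto);
protoCat.name = "Tiger";
protoCat.sound = "Meow";

classCat.makeSound(); // "Meow"
protoCat.makeSound(); // "Meow"
```

In all three versions, makeSound lives on exactly one shared object and the instances merely borrow it through the hidden __proto__ link.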


This is where it gets interesting. If this prototype behaviour is common to every single object, there must be some object everything eventually points to, right? Existentialism aside, this is absolutely correct. All* objects keep inheriting properties through the hidden __proto__ all the way up. If the cat object does not have a makeSound property, the lookup checks its __proto__ object and uses that method. If it is not there, it checks a level deeper (i.e. the __proto__ of its own __proto__). You can make this trip yourself by following the __proto__s until you hit null, but it will not be clear where exactly the final object resides. Instead, let's use deductive and inductive reasoning.
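That trip can be made programmatically. A small sketch of my own (using Object.getPrototypeOf, the standard way to read the hidden link) that walks the chain of a plain object literal, though, as noted, it still does not tell you what that final object is:

```javascript
// Walk the hidden __proto__ chain until it runs out (null)
function prototypeChain(obj){
  var chain = [];
  var current = Object.getPrototypeOf(obj); // safer than obj.__proto__
  while (current !== null) {
    chain.push(current);
    current = Object.getPrototypeOf(current);
  }
  return chain;
}

var chain = prototypeChain({});
console.log(chain.length);                  // 1: a literal is one step from the top
console.log(chain[0] === Object.prototype); // true: the suspected ancestor
```

An array makes the trip in two steps (Array.prototype, then the same final object), which hints that everything converges somewhere.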

I will go with the assumption that everything in JS is an object. With the exception of primitives, we can check the prototype chain using the instanceof operator (instanceof checks whether a constructor's prototype property appears somewhere in the object's prototype chain). In essence, if all constructor functions are objects (see the earlier assumption), they must all inherit from the Function object. To check this:

    Animal instanceof Function //true

OK, this is getting somewhere. From here onwards it will get confusing, but please bear with me. Function is actually just a normal JS function that all functions inherit from. So it must also have a prototype property, from which all functions inherit. It will also have the usual hidden __proto__ property, because everything is an object (if the assumption still holds). Because functions are also objects, they can be part of a __proto__ chain. As strange as it sounds, a function could play both the part of a prototype and that of a hidden __proto__, which is exactly what is happening with Function.

Function.__proto__ === Function.prototype //true

From here onwards, we check out of functions and move on to objects (we've closed the loop, so to speak). Function.prototype, the key to all functions, has no prototype property of its own (it's set to undefined, but that's not strange – you can do that to any function). It only has a __proto__ property, meaning we still have a chain to follow, and it points back to Object.prototype. Object is the natively available constructor function that builds empty objects, and, being a constructor function, its prototype property acts as part of the __proto__ chain for all of its instances. It turns out Object.prototype is just a plain object, and, surprisingly, its own __proto__ is null. This is a first for an object in JavaScript. Maybe, just maybe, we have found the single source of truth for all JS objects. There are a few checks to verify this.

Object.prototype instanceof Object //false. The chain stops here
Object.prototype.isPrototypeOf(Function) //true
Object.prototype.isPrototypeOf(Object) //true
Object.prototype.isPrototypeOf({})  //true. Looks like Object literals inherit from it too!

So, at this point I think it is fair to say that all JS objects*, through the __proto__ chain, point to Object.prototype. Feel free to disprove this! Bonus image below for visual learners.

© Professional JavaScript for Web Developers, 3rd Edition, Nicholas C. Zakas, ISBN: 978-1-118-22219-5

* : Someone rightfully pointed out that Object.create(null) has no __proto__, essentially removing it from the prototype chain. This is the equivalent of creating an object and setting the __proto__ to null.

Links I’ve encountered #4

Merry Christmas!! After a family filled day, I decided to sneak in a couple of interesting articles I’ve read in the past week or so.

How the Grinch stole array.length access – It hits a very sensitive nerve. Caching array.length is one of those micro-optimizations that gets spread around like an urban legend: it sounds reasonable enough, but very few people actually go out and prove whether it works. A good thing to note is that the research was done purely on the V8 engine.

Big Data, Machine Learning, and the Social Sciences: Fairness, Accountability, and Transparency – A while back I watched Hanna Wallach’s talk on some text analysis and she intrigued me in the applications of ML techniques in the real world. This article goes in great (and better) detail in explaining what I was trying to convey a while back.

One Size Fits Some: An Examination of Responsive Image Solutions

This is an article I wrote for Toptal's blog. After getting permission from them, I decided to post it here too. It is an overview of the latest responsive image solutions, coupled with ideas that might help in deciding on the correct solution to use.

As mobile and tablet devices come closer to achieving final world domination, web technology is in a race to accommodate the ever-growing number of screen sizes. However, devising tools to meet the challenges of this phenomenon brings a whole new set of problems, with one of the latest buzzwords to emerge being "responsive web": the challenge of making the web work on most, if not all, devices without degrading the user's experience. Instead of designing content to fit desktops or laptops, information has to be available for mobile phones, tablets, or any surface connected to the web. However, this evolution has proven to be a difficult and sometimes painful one.

While it can be almost trivial to accommodate textual information, the tricky part comes when we take into consideration content like images, infographics, videos, and so forth, which were once designed with only desktops in mind. This brings up not only the question of displaying the content correctly, but also of how the content itself is consumed on different devices. Users on smartphones are different from users on desktops; things like data plans and processing speed have to be considered as well. Google has started to highlight mobile-friendly sites in its search results, with some speculating that this will lead to a substantial PageRank boost for such sites. Earlier solutions addressed this by deploying mobile-only subdomains (and redirects), but this increased complexity and fell out of fashion quickly. (Not every site can afford that route.)

On the Quest for Responsive Images

At this point, developers and designers have to ensure their website's load is optimized for mobile users. Over 20% of web traffic now comes from mobile devices, and the number is still rising. With images taking one of the largest shares of page data, it becomes a priority to reduce this load. Several attempts have been made, ranging from server-side to front-end solutions. To discuss these solutions, we first need to understand the shortcomings of the current way images are linked.

The <img> tag has only the src attribute linking directly to the image itself. It has no way of determining the correct type of image needed without any add-ons.

Can’t we just include all the image sizes in the HTML, and use CSS rules to display:none all but the correct image? That would be the most logical solution in a perfectly logical world. That way the browser could ignore all the images not displayed and, in theory, not download them. However, browsers are optimized beyond common logic. To render the page fast enough, the browser pre-fetches linked content before the necessary stylesheets and JavaScript files are even fully loaded. Instead of ignoring the large images intended for desktops, we end up downloading all the images, resulting in an even larger page load. The CSS-only technique only works for images intended as background images, because those can be set within the stylesheet (using media queries).
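For that background-image case, a sketch with plain media queries (the class and file names here are hypothetical):

```css
/* Only the image from the rule that actually applies is requested */
.hero {
  background-image: url("kitty_small.jpg");
}

@media (min-width: 768px) {
  .hero {
    background-image: url("kitty_large.jpg");
  }
}
```

Because the cascade resolves to a single winning rule, modern browsers fetch only the image that rule references, which is exactly the behaviour the display:none trick fails to deliver for inline images.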

So, what’s a website to do?

Back-End Solutions

Barring mobile-only sites/subdomains, we are left with sniffing the user-agent (UA) string and using it to serve the correct images back to the user. However, any developer can attest to how unreliable this solution can be. New UA strings keep popping up all the time, making it strenuous to maintain and update a comprehensive list. And of course, this does not even take into account the unreliability of easily spoofed UA strings in the first place.

Adaptive Images

However, some server-side solutions are worthy of consideration. Adaptive Images is a great solution for those preferring a back-end fix. It does not require any special markup; instead it uses a small JavaScript file and does most of the heavy work in its back-end file. It relies on a PHP-and-nginx-configured server. It also does not rely on any UA sniffing, but instead checks the screen width. Adaptive Images works great for scaling down images, and it is also handy when large images need art direction, i.e. image reduction through techniques such as cropping and rotation – not merely scaling.

Sencha Touch

Sencha Touch is another back-end solution, although it is better described as a third-party solution. Sencha Touch resizes the image accordingly by sniffing the UA. Below is a basic example of how the service works:

<img src="" alt="My Kitty Cat">

There is also an option to specify the image sizes, in case we do not want Sencha to resize the image automatically.

At the end of the day, server-side (and third-party) solutions require resources to process each request before sending the correct image back. This takes precious time and in turn slows down the request-response trip. A better approach might be for the device itself to determine which resources to request directly, with the server responding accordingly.

Front-End Solutions

In recent times, there have been great improvements on the browser side to address responsive images.

The <picture> element has been introduced and approved in the HTML5 specification by the W3C. It is not yet widely available in all browsers, but it will not be long before it is natively supported. Until then, we rely on JavaScript polyfills for the element. Polyfills are also a great solution for legacy browsers lacking the element.

There is also the srcset attribute for the <img> element, available in several WebKit-based browsers. It can be used without any JavaScript and is a great solution if non-WebKit browsers are to be ignored. It is a useful stop-gap for the odd case where other solutions fall short, but should not be considered a comprehensive solution.
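A minimal srcset usage, with hypothetical file names, might look like:

```html
<img src="kitty_small.jpg"
     srcset="kitty_medium.jpg 1.5x, kitty_large.jpg 2x"
     alt="My Kitty Cat">
```

Browsers that understand srcset pick the candidate matching the display density; everything else silently falls back to the plain src attribute, which is what makes it a low-risk stop-gap.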


Picturefill is a polyfill library for the <picture> element. It is one of the most popular front-end solutions to responsive images. There are two versions: Picturefill v1 mimics the <picture> element using spans, while Picturefill v2 uses the <picture> element in browsers that already offer it and provides a polyfill for legacy ones (for example, IE >= 9). It has some limitations and workarounds, most notably for Android 2.3 – which, incidentally, is an example of where the img srcset attribute comes to the rescue. Below is sample markup for using Picturefill v2:

<picture>
  <source srcset="/images/kitty_large.jpg" media="(min-width: 768px)">
  <source srcset="/images/kitty_medium.jpg" media="(max-width: 767px)">
  <img srcset="/images/kitty_small.jpg" alt="My Kitty Cat">
</picture>

To improve performance for users with limited data plans, Picturefill can be combined with lazy loading. However, the library could offer wider browser support and address the odd cases rather than relying on patches.


Imager.js is a library created by the BBC News team to accomplish responsive images with a different approach from Picturefill's. While Picturefill attempts to bring the <picture> element to unsupported browsers, Imager.js focuses on downloading only the appropriate images while also keeping an eye on network speeds. It also incorporates lazy loading without relying on third-party libraries. It works by placing placeholder elements and replacing them with <img> elements. The sample code below exhibits this behavior:

    <div class="image-load" data-src="{width}.jpg" data-alt="My Kitty Cat"></div>

    new Imager({ availableWidths: [480, 768, 1200] });

The rendered HTML will look like this:

    <img src="" data-src="{width}.jpg" alt="My Kitty Cat" class="image-replace">

Browser support is much better than Picturefill's, at the expense of being a more pragmatic solution than a forward-thinking one.

Source Shuffling

Source Shuffling approaches the problem from a slightly different angle than the rest of the front-end libraries. It resembles something out of the "mobile first" school of thought: it serves the smallest resolution possible by default, and upon detecting that the device has a larger screen, it swaps the image source for a larger image. It feels more like a hack and less like a full-fledged library. This is a great solution for chiefly mobile sites, but it means double resource downloading for desktops and/or tablets is unavoidable.

Some other notable JavaScript libraries are:

  • HiSRC – A jQuery plugin for responsive images. Dependency on jQuery might be an issue.
  • Mobify.js – A more general library for responsive content, including images, stylesheets and even JavaScript. It ‘captures’ the DOM before resource loading. Mobify is a powerful comprehensive library, but may be overkill if the goal is just responsive images.


At the end of the day, it is up to the developer to decide which approach suits the user base. This means data collection and testing will give a better idea of the approach to take.

To wrap up, the list of questions below can be helpful to consider before deciding the appropriate approach.

  • Are legacy browsers an issue? If not, consider using a more modern approach (e.g. Picturefill or the srcset attribute)
  • Is the response time critical? If not, go for a third-party or back-end solution.
  • Are the solutions supposed to be in-house? Third-party solutions will obviously be ruled out.
  • Are there lots of images already on a site that is trying to transition to responsive images? Are there concerns about validation or semantic tags (or rather non-semantic tags)? This will require a back-end solution to route the image requests to something like Adaptive Images.
  • Is art direction a problem (specifically for large images with a lot of information)? A library like Picturefill will be a better solution for such a scenario. Also, any of the back-end solutions will work as well.
  • Is there a concern about lack of JavaScript? Any of the front-end solutions will be out of the question, which leaves the back-end or third-party options that rely on UA sniffing.
  • Is there a priority for mobile response times over desktop response times? A library like Source Shuffling may be more appropriate.
  • Is there a need to provide responsive behavior to every aspect of the site, not just images? Mobify.js might work better.
  • Has the perfect world been achieved? Use CSS-only display:none approach!

Before the Drama

The best post about the recent io.js fork of Node. I'm surprised at how low-key the fork has been, with everyone being pretty understanding.


I am going to comment on the recent node fork. Soon. I am not happy about it. I also don’t think it’s bad. I’ve been involved in the conversations with most sides since May and am in a unique position being (probably) the only “guy in the middle” that I think I can provide a perspective that is more complete than most. However, before I do that I would like to defuse the drama.

Given my position at Walmart and the fact that I knew a fork is highly likely for half a year, you can imagine I had a few internal conversations about node and its future inside and outside of Walmart. A large(st) enterprise has to ensure its investments are durable and sound. I shared the situation with my senior management and the message I delivered to them is the same one I am going to deliver…

View original post 483 more words

Links I’ve encountered #3

From Markov Chains to Technocracy.

Markov Chains explained – While it is easy to explain how Markov Chains work, it is often hard to grasp the intuition behind why they work the way they do. A couple nice visualizations won’t hurt either.

How the other half works – A polarized look at the PM vs developer perceptions. It is hard to disagree with him; however, anecdotal experiences are not good judgements of an entire industry.

Why Blurring Text is a Bad Idea – I will stick to the tried and proven method of MS-Paint black boxes over images. No layers, no leaks.

NodeJS: File Streams, Reading, and Piping

Lately I’ve been working with Node.js to try to understand a back-end environment with JavaScript. I think it is fair to say that Node.js offers a smooth transition from front-end JavaScript to back-end. Concepts like event loops, closures, and variable scoping provide a familiar mindset to work with server management. While I have worked with Python and PHP back-ends, using Node.js was the first time I felt like I had a clue about what was going on. To prove how smooth the transition was, I managed to debug an error in less than 30 seconds without any prior experience working with file reading, streaming, or piping.


Node.js, like any back-end language, offers a simple file reading and writing module. And like most Node data modules, it uses the streams abstraction to communicate between services. Streams are one of those core Node ideas found in most data services. There are two types of streams: readable and writable. Each has its own events and behaviours and can be controlled with event listeners to suit your needs. You can listen to events like connections, data, the stream ending, and various others (depending on the module).

Stream Problems:

However, data transfer usually comes with problems. Say you have a client to whom the server is sending data read from an outside source. You are then doing two things: reading the data and sending it to the client. At certain points, you will find the client consuming the data slower than you are sending it. If this behaviour isn't checked, you end up with the slow client problem, where unsent data piles up in memory. Fortunately, Node allows you to pause a readable stream and listen for the drain event from the client's writable stream.

  var fs = require("fs"),
      http = require("http");
  var readFile = fs.createReadStream("my_file.txt");

  http.createServer(function(request, response){
    readFile.on("data", function(data){
      if (!response.write(data)) {
        readFile.pause(); //if we aren't writing to the client, don't read anything!
      }
    });
    response.on("drain", function(){
      readFile.resume(); //the client caught up, keep reading
    });
    readFile.on("end", function(){
      response.end();
    });
  }).listen(8080);

Using stream.pipe():

The pause-and-resume pattern shows up throughout Node wherever data transfer is concerned, and it has a solution that is simple and cleanly abstracted for most streams: stream.pipe() magically takes care of it. In short, you pipe a readable stream to a writable stream (i.e. readStream.pipe(writeStream)). The previous example can be shortened to:

  var fs = require("fs"),
      http = require("http");
  var readFile = fs.createReadStream("my_file.txt");

  http.createServer(function(request, response){
    readFile.pipe(response);
  }).listen(8080);

That is it; except for a simple error that I mentioned in the first paragraph. Can you spot it?

Hint #1: Try refreshing the page.

Hint #2: Your node service needs to create a new read stream for each request.

Why Standard Economic Models Don’t Work–Our Economy is a Network

Starts with an interesting premise, but I'm not too sure about the conclusions drawn.

Our Finite World

The story of energy and the economy seems to be an obvious common sense one: some sources of energy are becoming scarce or overly polluting, so we need to develop new ones. The new ones may be more expensive, but the world will adapt. Prices will rise and people will learn to do more with less. Everything will work out in the end. It is only a matter of time and a little faith. In fact, the Financial Times published an article recently called “Looking Past the Death of Peak Oil” that pretty much followed this line of reasoning.

Energy Common Sense Doesn’t Work Because the World is Finite 

The main reason such common sense doesn’t work is because in a finite world, every action we take has many direct and indirect effects. This chain of effects produces connectedness that makes the economy operate as a network. This network behaves…

View original post 3,838 more words