• Holographic user interfaces were envisioned for years as the stuff of science fiction, but this changed in January 2015 with the announcement of the Microsoft HoloLens. Now, with the developer kit available to the general public, the HoloLens is being used to harness the power of Mixed Reality to create some of the very first holographic UIs. Holographic UIs are powerful because they have true spatial awareness: they can use the space around us to contextualize and ground visualizations and interfaces. Moreover, the device is untethered, which allows the user to fully engage with holograms without worrying about tripping over cords. One downside of such interfaces is that they are visible only to the person wearing a headset, so until holographic computing is ubiquitous we will always need to consider the experience for bystanders. This blog post focuses on how to capture this mixed reality world for viewing on traditional 2D devices.

    Capturing Mixed Reality will make your experiences powerful

    The easiest way to bring the holographic experience to bystanders is mixed reality capture on the HoloLens device itself. As part of the operating system, the HoloLens can capture frames from the onboard RGB camera and augment them with the holograms being presented to the user. This works for both video and still images, and does a great job of showing precisely what the user sees from a first-person perspective. While this is easily accomplished, it is less powerful because we cannot see the user of the experience, and studies suggest that body language makes up over 50% of how we communicate with one another. As a demonstration, look at these contrasting pictures of the same experience:

    Hololens-ball1

    To capture mixed reality from a third-person perspective, it is obviously necessary to introduce another camera; the question is how to determine the position of this camera relative to the person wearing the HoloLens.

    Hololens-ball2

    The simplest way of doing this is to use a secondary HoloLens unit to create a ‘shared experience’ between the two units. Shared experiences use the spatial anchoring system built into the HoloLens to allow two units to see holograms in the same position, orientation, and scale. With a shared experience, we can simply use the second HoloLens to capture a ‘first-person’ video or image, and because the experience is shared, the holograms should be accurately placed. This technique works reasonably well, but it isn’t ideal: it requires two HoloLens units, and the resulting image is limited to the resolution of the 720p webcam.

    The better way is to use an external camera. This is, in fact, the exact approach Microsoft took in creating its Birds Eye View (BEV) camera, which has been demonstrated on stage at //build and other conferences. Their custom setup can be simplified to the following architecture:

     

    hololens-diagram1

    This setup relies on three different computers coordinating to stream positional information in order to create a compelling shared experience that can be recorded or streamed to audiences. It is ideal for these scenarios, but it isn’t really feasible on smaller scales, or without the engineering effort required to do the actual composition.

    Here’s how to create your own third-person Mixed Reality capture

    Since Infusion does a number of public events where we want to capture mixed reality, we wanted to create our own version of Microsoft’s BEV camera without the extensive and unwieldy setup it requires. To make this happen, we settled on the following simplified architecture:

     

    Hololens-Diagram2

    This setup replaces many of the coordination responsibilities of the composition PC with Azure and SignalR, and uses the HoloLens itself in order to visualize the holograms from the perspective of the camera.  Specifically, the process to capture mixed reality images is as follows:

    1. Setup:
      1. The HoloLens, the Camera Service, and the Mobile app all connect to the same SignalR bus.
      2. The HoloLens is moved to the position and orientation of the physical camera, and a voice command is given to locate the camera in the scene (using a spatial anchor)
    2. Camera Capture:
      1. A ‘take picture’ command is issued by the mobile client to SignalR
      2. The HoloLens takes a picture of the holograms from the perspective and aspect ratio of the camera, using a transparent background
      3. The camera takes a regular picture
      4. The pictures are sent asynchronously to the Azure blob service with known identifiers
    3. Composition
      1. A WebJob running in Azure composites the pictures
      2. The output is written back to blob store in an output folder
      3. A link to the file is pushed back via SignalR for display or sharing in the mobile application
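    The message flow above can be sketched with a toy in-memory bus standing in for SignalR. Everything here is illustrative: the event names, the blob-store stand-in, and the two-part completion check are assumptions made for the sketch, not the real HoloLens, SignalR, or Azure APIs.

```javascript
// Toy in-memory stand-in for the SignalR bus described above.
// All names (takePicture, uploaded, resultReady) are illustrative, not a real API.
function createBus() {
  const handlers = {};
  return {
    on(event, fn) { (handlers[event] = handlers[event] || []).push(fn); },
    send(event, payload) { (handlers[event] || []).forEach(fn => fn(payload)); }
  };
}

const bus = createBus();
const blobStore = {};   // stands in for the Azure blob service
const received = [];    // what the mobile app gets back

// HoloLens: captures the holograms with a transparent background on command
bus.on('takePicture', id => {
  blobStore[id + '/holograms.png'] = 'transparent-hologram-frame';
  bus.send('uploaded', { id });
});

// Camera service: captures the real-world frame on the same command
bus.on('takePicture', id => {
  blobStore[id + '/camera.png'] = 'rgb-camera-frame';
  bus.send('uploaded', { id });
});

// Composition job: once both halves exist, composite and push a link back
const pending = {};
bus.on('uploaded', ({ id }) => {
  pending[id] = (pending[id] || 0) + 1;
  if (pending[id] === 2) {
    blobStore[id + '/output.png'] = 'composited-mixed-reality-image';
    bus.send('resultReady', id + '/output.png');
  }
});

bus.on('resultReady', link => received.push(link));

// Mobile app issues the capture command with a shared identifier
bus.send('takePicture', 'shot-001');
```

In the real system the two uploads land in Azure blob storage under a known identifier, and the WebJob composites the transparent hologram frame over the camera frame.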

    This architecture has a number of benefits over the full BEV solution – namely that we can offload the processing into Azure, and do not require the same expensive recording rig that is required for the BEV camera.  Moreover, because the web job is just combining images, it is possible to include watermarks or any other transparent layers in the process to better personalize your mixed reality captures.

    This technique is highly recommended for use in public demonstrations of the HoloLens and leaves your customers with a shareable image of their first interaction with the world of holograms.


Holographic Moments
in Mixed Reality

  • In my opinion, React is a game changer. It proved that modern JavaScript VMs are incredibly fast and that the main bottleneck of web applications is the DOM. With its virtual DOM approach, React has influenced many other players, such as Ember, Backbone Marionette, and Ampersand.
    At the same time there is another important newcomer: Flux, an application architecture based on the idea of unidirectional data flow and commonly used with React. It showed that unidirectional data flow provides great stability and scalability, and it has influenced other frameworks in turn.
    Together these tools introduce a completely new paradigm of web application development, one that takes newcomers some time to ‘grok’.
    In this article I would like to share my experience working with these great tools, describe the key ideas behind them, and cover some extra tools commonly used alongside them, such as Immutable.js and React Router.
    I will omit the basics, so if you are not familiar with them it’s better to read the docs first.

    DISCLAIMER: This is a very opinionated article based largely on my own experience, so some things may be controversial or completely wrong.

    React

    Thinking in components

    There is a good article by Pete Hunt, one of the developers of React, describing how to change your mindset to work with React quickly and effectively. In fact, the first thing you face when you start working with React is that there are no controllers, no views, and not even models: all the things we are used to in regular frameworks. And that is before mentioning JSX syntax, which adds extra overhead. So it really takes some time to figure out how to build your app using these tools.
    But the main idea is very simple: think in terms of components. Break your whole application into multiple small, simple components, carefully select the entry point for your data (I will explain this later), and spread the relevant parts of the data over the child components.
    It turns out that the component-based approach makes it easier to focus on your app and to understand it, especially when a single method defines how each component should look.
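    The idea can be sketched without any framework at all: each component is one small function that says what it looks like, and the parent passes down only the data its children need. The component names below are made up for illustration.

```javascript
// Each component is a single function of its data (a stand-in for a React
// component's render method); parents compose children and pass props down.
const Avatar   = user  => `<img src="${user.avatarUrl}" alt="${user.name}">`;
const UserName = user  => `<h2>${user.name}</h2>`;
const UserCard = user  => `<div class="card">${Avatar(user)}${UserName(user)}</div>`;
const UserList = users => `<ul>${users.map(u => `<li>${UserCard(u)}</li>`).join('')}</ul>`;

const html = UserList([{ name: 'Ada', avatarUrl: 'ada.png' }]);
```

In React the functions would return elements instead of strings, but the decomposition, and the top-down flow of data, is the same.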

    Keep it simple

    Although this advice applies to all parts of your app, it is worth mentioning in the context of components: try to keep them small and simple, and make them fit on one screen. This brings good readability and modularity to your components.

    Containers

    If you have read the React documentation, you have probably noticed one idea running through all the pages: all external data is kept in ‘props’ and all component state is kept in ‘state’. That is clear enough until you get to the top-level component and realize that you don’t have an ‘entry point’ for your ‘props’. This is where containers come in.

    Containers are special components that communicate with stores (or another data source if you don’t use Flux) and spread the data over their child components. They keep this data in their own state. This is the point where you should compose your components carefully, because it affects your rendering performance. React rendering is very fast, but you can boost it further by avoiding unnecessary rendering, and smart composition is the key.

    The thing is, every time your store (or other data source) emits a change event, the subscribed containers receive the data, set their own state, and render their content. If you pass surplus data to your child components, they will perform unnecessary renders.

    PureRender mixin

    PureRender is a built-in React addon, and you should use it everywhere except, perhaps, in containers.
    As I mentioned above, when a container receives data it spreads its state over the child components, which triggers rendering. But what if your container listens to multiple stores, and a change in one of them re-renders a whole group of unrelated components? By default, your component will render its content, React will diff the output against the previous one, and if they are equal it will not touch the DOM.
    We can avoid even this diff operation by using the ‘shouldComponentUpdate’ hook: compare the new props with the existing ones, and if they are equal return false, which tells React not to render this component.
    This is basically what the PureRender mixin does: a shallow comparison of the objects.
    It works well as long as your data is either simple or immutable (which we’ll discuss below).
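    The shallow comparison behind PureRender can be written in a few lines. This is a sketch of the idea, not React’s actual source:

```javascript
// A sketch of the shallow comparison PureRender performs inside shouldComponentUpdate.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Only one level deep: values are compared by reference / identity.
  return keysA.every(key => a[key] === b[key]);
}

// The mixin effectively installs this hook on your component:
function shouldComponentUpdate(currentProps, nextProps) {
  return !shallowEqual(currentProps, nextProps);
}

const props   = { id: 1, items: [1, 2] };
const sameRefs = { id: 1, items: props.items }; // same array reference: render skipped
const newArray = { id: 1, items: [1, 2] };      // equal contents, new reference: re-render
```

Note that the equal-content array still triggers a render, because a shallow check compares it by reference; that limitation is exactly what the next sections address.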

    With complex, deeply nested objects it is better to pass the data further down and perform the check where the data is simpler. This is exactly why immutable data structures are such a great idea.

    Immutable objects

    As Wikipedia puts it, an immutable object is an object whose state cannot be modified after it is created. This means that every time you change the state of such an object, you get a new object with the updated value.
    This neatly solves the problem above: making a deep comparison of a complex object is a costly operation, but with immutable objects you simply compare references. Fast and simple.
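    A plain-JavaScript sketch makes the point (the `set` helper is invented here; Immutable.js gives you the same guarantee with a far richer API):

```javascript
// Immutable update: changing a value yields a brand new object, so
// "did anything change?" becomes a single reference comparison.
const set = (obj, key, value) =>
  obj[key] === value ? obj : Object.freeze(Object.assign({}, obj, { [key]: value }));

const state1 = Object.freeze({ query: 'react', page: 1 });
const state2 = set(state1, 'page', 2); // new object: something changed
const state3 = set(state2, 'page', 2); // no-op: same reference comes back
```

A container holding `state1` only needs `state1 === state2` to know something changed, and `state2 === state3` to skip rendering entirely.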

    There are a bunch of libraries, but the most popular is Immutable.js, developed by Facebook. It is really fast and has a consistent, rich API.

    Every time I write a React application I use Immutable.js. It gives me full control over the data and keeps it safe.

    State as temporary store

    It may sound ridiculous at first, but it starts making sense once you deal with forms.
    When you have a form and data to edit, the first instinct is to put the temporary data into your store (or whatever you use to keep app state).

    But going that way you will end up with messy code and unstable app state. It is much easier and faster to keep intermediate data in the component’s state, which lets you easily roll back your changes or commit them. That is where mixins like ‘LinkedStateMixin’ / ‘LinkedImmutableStateMixin’ and two-way data binding come into the game: you attach a ‘valueLink’ property to your component, specify the property name, and when you need to persist your changes you just pass the state to an action.
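    Under the hood a ‘valueLink’ is nothing more than a value plus a requestChange callback. The fake component below only illustrates that shape; it is not the real mixin:

```javascript
// A toy "component" with local state and a linkState helper that hands out
// { value, requestChange } pairs, the shape a valueLink has.
function makeComponent(initialState) {
  const component = { state: Object.assign({}, initialState) };
  component.linkState = key => ({
    value: component.state[key],
    requestChange(next) { component.state[key] = next; } // real mixin calls setState
  });
  return component;
}

const form = makeComponent({ email: '' });
const link = form.linkState('email');
link.requestChange('ada@example.com'); // what an <input> would do on change
```

An input bound to this link renders `link.value` and calls `link.requestChange` on every keystroke; committing the form is then just passing `form.state` to an action, and rolling back is discarding the component state.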

    Flux

    Let’s talk about Flux.
    Flux, as I mentioned above, is an application architecture created by Facebook and based on the idea of unidirectional data flow. The interesting thing is that Facebook provided just the idea, without a particular implementation, except for the event dispatcher. This led to multiple community implementations: some of them are pure, others introduce changes.

    My favorite is Alt, from the Airbnb folks: a pure implementation of Flux with a very laconic API that saves you from tons of boilerplate code.

    Although my advice about Flux will be given in the context of Alt, I am sure it applies to any other Flux implementation.

    Actions are synchronous

    Usually, the first thing you want to do when an action involves an async operation, such as an AJAX request, is to return a promise or pass a callback to the action. That is wrong, and it breaks a fundamental Flux idea: all actions are synchronous. There are plenty of discussions and responses from the Facebook folks on this topic.

    So how do you handle async actions in a synchronous manner? Since our stores are subscribed to actions, it is very easy. For example, say you have an action creator called ‘SearchActions’ with an action ‘find’ that performs an async operation. All you need to do is add two more actions, ‘findComplete’ and ‘findFail’, and call one of them depending on the result of the async operation. The store, in turn, simply subscribes to them and updates its data based on the results.

    This scenario is fairly generic and fits most cases. Even when you need to reflect the async operation in your UI, you can easily implement it by adding a flag to your store (something like ‘isLoading’), setting it to true when ‘find’ fires and to false when either ‘findComplete’ or ‘findFail’ fires, and then rendering your components in the proper state.
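    The whole pattern fits in a short framework-free sketch. The tiny dispatcher, the store, and fakeSearch are all stand-ins invented for illustration, not Alt’s real API:

```javascript
// Minimal dispatcher: stores subscribe to actions by name.
const listeners = {};
const on = (action, fn) => (listeners[action] = listeners[action] || []).push(fn);
const dispatch = (action, payload) => (listeners[action] || []).forEach(fn => fn(payload));

// Store: subscribes to all three actions and keeps an isLoading flag.
const searchStore = { isLoading: false, results: [], error: null };
on('find',         ()      => { searchStore.isLoading = true; });
on('findComplete', results => { searchStore.isLoading = false; searchStore.results = results; });
on('findFail',     error   => { searchStore.isLoading = false; searchStore.error = error; });

// Pretend AJAX call; made up for the sketch (callback fires when it settles).
function fakeSearch(query, cb) { cb(null, [query + '-result']); }

// Action creator: 'find' itself is synchronous; the async work fires
// findComplete / findFail with the result.
function find(query) {
  dispatch('find');
  fakeSearch(query, (err, results) =>
    err ? dispatch('findFail', err) : dispatch('findComplete', results));
}

find('react');
```

Note that ‘find’ itself returns nothing and completes synchronously; only the follow-up actions carry the async result back into the store.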

    As you can see, this is a completely different way of handling data flow than we are used to.

    Use services

    Although actions may seem like a good place to keep your business logic, they are not. The reason is that the default Dispatcher (the only part Facebook actually provided) does not allow you to invoke one action inside another (technically you can, by wrapping the code in setTimeout, but that is a hack), so you cannot perform complex actions that involve others. It is better to move your logic into services, which gives you flexibility, testability, and modularity. In fact, it turns out that moving business logic out of actions is a good opportunity to cover your app with more precise tests.

    Moreover, the ‘sync only’ rule does not apply to services, so you can freely use promises or callbacks there.

    Small stores

    As with components, keep your stores simple. You do not need to keep all your app data in one store; doing so would re-render the whole app every time anything changed.
    Since stores and containers are tightly coupled, composing the components involves the stores as well, and you need to think carefully about how to compose your stores to avoid unnecessary re-rendering. But don’t go too far with that 🙂

    React Router

    Most SPAs use a routing system to navigate among pages, and React-based apps are no exception. While you can use any of the existing routing libraries, the most popular is React Router.

    React Router was created under the influence of Ember’s routing system, and it is really good.

    Use router object

    Although React Router provides a mixin for performing transitions from your components, it is better to initialize the router as an object and perform all transitions inside your actions.
    Why? Because of the synchronous nature of Flux. If a transition does not require an async operation, such as fetching data from the server, it is fine to do it inside your component; but when it does, it is better to put the transition inside an action (‘actionComplete’, for example). This gives you more control over the flow.

    Route components

    Routes in React Router are components that define which components should be rendered inside them. While it is possible to render everything you need there, I highly recommend using these components only as entry points for containers and for handling transitions.
    This gives you good separation of concerns and clear code isolation.

    React Router is good but not enough

    I’d say that React Router is a good attempt at a routing system for React apps, but from my perspective it does not provide solutions for async operations in Flux-based apps.

    Every time you need to make an async call during a transition to a route, the first instinct is to break the Flux paradigm of synchronous data flow and perform the async operation inside the ‘willTransitionTo’ hook, which is of course completely wrong. This leads to hunting for workarounds to solve the problem the Flux way.
    In common cases it is enough to simply call an action inside the hook, but in situations where you need a more precise order of execution you have to find a better solution.
    So I think React Router is not a silver bullet, and you may want to try other solutions, such as Fluxible Router by Yahoo.

    Conclusion

    Although the React world is rather young, it has already caused a significant shift in front-end development with its revolutionary and unconventional approaches. But because of its age it lacks settled best practices and complete solutions, especially where Flux is concerned. This leads to multiple approaches that are often inconsistent and break the fundamental ideas.

    It would be really helpful if Facebook provided a complete framework based on both Flux and React. And it seems Facebook has realized this need and announced a new framework called ‘Relay’, which should solve many of the problems related to proper data fetching and mutation.
    But for now, we have to find the best practices and solutions on our own and spread the knowledge.


React way

  • ES6

    Everyone has probably heard about ES6: how awesome it is, the new features, classes, and so on. ES6 was released in June, but it is still unsupported by most browsers. “But I want to use it”, so what to do? You can use a transpiler. What is a transpiler? It is a type of compiler that takes the source code of one programming language as its input and outputs source code in another language. In our case it takes ES6 code and transforms it into ES5. There is more than one transpiler available; I personally prefer BabelJS. You can try it in the playground at https://babeljs.io/. It’s awesome.

    Starting with Babel

    You can simply install Babel using npm (Node Package Manager)

    npm install [-g] babel
    

    After you do this, we can create some simple ES6 code:

    let numbers = [1, 2, 3, 4, 5, 6, 11, 23, 31, 1, 2];
    let kms = [];
    numbers.forEach(number => kms.push(number.toString() + ' km'));
    

    Let’s save this code in test.js

    In the example above I am using an arrow function, which looks much like a lambda expression in C#. As we can see, it is simpler than the ES5 equivalent.

    You can type the following from the shell (I use the Windows command line):

    babel test.js --out-file testes5.js
    

    to compile the code above.

    The result is:

    'use strict';
    
    var numbers = [1, 2, 3, 4, 5, 6, 11, 23, 31, 1, 2];
    var kms = [];
    numbers.forEach(function (number) {
      return kms.push(number.toString() + ' km');
    });
    

    Babel CLI provides many options. For more details see https://babeljs.io/docs/usage/cli/.
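    As a side note, the same ES6 snippet reads even better with `map` and a template literal, two more ES6 constructs that Babel transpiles just as happily:

```javascript
// Same transformation as the forEach-and-push version, written with map.
let numbers = [1, 2, 3, 4, 5, 6, 11, 23, 31, 1, 2];
let kms = numbers.map(number => `${number} km`);
```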

    This solution is not perfect for development, but the good news is that you can use Babel with Gulp, Grunt, Broccoli, and even Brunch. I will show you how to use Babel with Broccoli.

    To use Babel with Broccoli, you need to install broccoli-babel-transpiler from npm:

    Installation
    npm install broccoli-babel-transpiler [--save-dev]
    

    Next, create your Brocfile.js and set up the Broccoli Babel transpiler:

    'use strict';
    var babel = require('broccoli-babel-transpiler');
    var testES6 = babel('src');
    module.exports = testES6;
    

    Broccoli then builds the dist folder from the command line.

    Broccoli will automatically create the dist folder, grab the JS files from the src folder, compile them to ES5, and put the output into the dist folder.
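    If you need to tweak the compilation, broccoli-babel-transpiler also accepts an options object that is passed through to Babel. The sketch below is based on the plugin’s README; the option names shown are assumptions and may differ between versions:

    ```javascript
    // Brocfile.js – passing options through to Babel (a sketch)
    'use strict';
    var babel = require('broccoli-babel-transpiler');

    module.exports = babel('src', {
      sourceMaps: 'inline', // emit inline source maps for easier debugging
      comments: false       // strip comments from the compiled output
    });
    ```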


ES6

  • Brief

    When starting a new project, a JavaScript developer feels blessed and cursed at the same time. There is an almost unlimited number of frameworks to choose from, not counting the other ‘helper’ tools. We have multiple package managers (npm, Bower, and also NuGet in the Visual Studio ecosystem), yet not every package is present in them – some must be downloaded directly from GitHub or another repository.
    There are also issues with managing code written in EcmaScript 5. Developers often avoid them by using meta-languages on top of JavaScript, such as TypeScript or CoffeeScript, or – a somewhat newer approach – EcmaScript 6. Since browsers recognize neither meta-languages nor ES6, the developer must use compilers/transpilers to produce valid ES5 code, typically driven by Grunt, Gulp, or another build tool. The solution grows along with its complexity…

    JSPM

    Here comes JSPM, a modern package manager built on top of SystemJS. SystemJS is a module loader that implements es6-module-loader and is capable of loading a variety of module formats:

    • AMD-style (RequireJS)
    • CommonJS (node.js)
    • ES6
    • Globals
    • … and even TypeScript/CoffeeScript/React’s JSX directly!

    JSPM adds a fresh package-manager ecosystem on top of SystemJS, with auto-configuration performed during package installation. In less advanced setups, there is no need to modify a single line of the module loader configuration manually.

    Installing JSPM

    The installation procedure is fairly straightforward. First, you need to install JSPM as a global NPM module:

    npm install -g jspm
    

    Now, with JSPM installed on your system, you can initialize its configuration. Navigate to the root folder of your project and type

    jspm init
    

    The script will ask you a few questions, the most important of which is the module format exposed by your scripts. After you answer them, JSPM will perform some additional tasks:

    • Create a config.js file, the configuration for the SystemJS module loader
    • Create a package.json file, which holds information about your project
    • Download the required packages: SystemJS, es6-module-loader, and an ES6 transpiler (Traceur, or Babel – formerly known as 6to5)
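    For reference, the generated config.js is just a call to SystemJS’s configuration API. A freshly initialized project produces something along these lines – a sketch only, since the exact paths, transpiler, and map entries depend on your answers to jspm init:

    ```javascript
    // config.js (sketch of what `jspm init` generates)
    System.config({
      baseURL: "/",
      transpiler: "babel",  // or "traceur", depending on your choice
      paths: {
        "github:*": "jspm_packages/github/*",
        "npm:*": "jspm_packages/npm/*"
      }
    });
    ```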

    Running

    The following piece of HTML is the minimum required to run the application:

    index.html

    <!-- index.html -->
    <!DOCTYPE html>
    <html>
      <head>
        <script src="jspm_packages/system.js"></script>
        <script src="config.js"></script>
        <script>
          System.import('app/main');
        </script>
      </head>
      <body>
        <div id="output"></div>
      </body>
    </html>
    

    app/main.js

    //main.js
    var element = document.getElementById("output");
    setInterval(() => {
        element.innerText = new Date().toString();
    }, 100);
    

    Wait, wait… what is that lambda expression doing there? Is that really JavaScript?

    Yes! It’s an ES6 arrow function, which makes “this” behave more consistently: it is bound lexically to the enclosing scope. In the example above this has no visible effect, but once classes come into play, everything changes.
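    To illustrate what changes with classes, here is a minimal sketch (the Counter class is hypothetical, not part of the clock example): inside a class method, an arrow callback keeps “this” bound to the instance, where an ordinary function expression would lose it.

    ```javascript
    class Counter {
      constructor() {
        this.count = 0;
      }
      incrementFor(items) {
        // Arrow function: "this" is inherited lexically from incrementFor(),
        // so it still refers to the Counter instance inside the callback.
        items.forEach(() => {
          this.count += 1;
        });
        // With `function () { this.count += 1; }` instead, "this" would be
        // undefined (class bodies are strict mode) and the code would throw.
        return this.count;
      }
    }

    var c = new Counter();
    console.log(c.incrementFor([10, 20, 30])); // 3
    ```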

    Let’s open index.html in a browser. You will see a simple clock displaying the current date and time.

    If the example works, it’s time for the next step. I prepared a simple clock that uses the jQuery and Moment.js libraries. It is written in ES6 and ready to use with JSPM. First, install it:

    jspm install web-clock=github:orkisz/web-clock@0.1.0
    

    The command above will install the web-clock package from my GitHub repository and register it project-wide under the name “web-clock”.

    Now you can modify app/main.js to use it:

    //main.js
    import wc from "web-clock";
    
    var clock = new wc("#output");
    
    clock.start();
    

    How will JSPM know what is under “web-clock”? Take a look at the config.js file – under the “map” section there should be a line:

    "web-clock": "github:orkisz/web-clock@0.1.0"
    

    If you browse the project folder, you will notice a jspm_packages folder which holds all of the project’s dependencies. The “web-clock” package is placed under the github/orkisz/web-clock@0.1.0 directory. The pattern is [registry]/[repo_owner]/[repo_name]@[version].

    Advantages

    The biggest advantage over the CommonJS + Browserify pair is the ability to run without a build tool like Grunt or Gulp recompiling a file every time it changes – everything happens in the browser. Naturally, precompilation for production releases is also possible.
    The main benefit over RequireJS is easy, straightforward configuration. As mentioned, in most cases there is no need to modify the config.js file manually: just install all the dependencies and voilà! If the default module configuration does not fit your needs, there is an easy way to replace it, entirely or partially, through local overrides.
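    If you do need to adjust the configuration by hand, one simple form is editing the “map” section of config.js yourself – a sketch, where the mapping of “web-clock” to a local folder is purely hypothetical:

    ```javascript
    // config.js – remapping a module name by hand (a sketch)
    System.config({
      map: {
        "web-clock": "local/web-clock"  // hypothetical local fork instead of the GitHub package
      }
    });
    ```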

    Conclusion

    Nowadays web applications can be so big that ES5 code becomes unmanageable. Structuring large client-side solutions is made easier by TypeScript or CoffeeScript, but they are not always a viable option. JSPM with native ES6 is a powerful tool that brings modern, efficient development to the present day.


JSPM – develop the future. Right here. Right now.