• It is common knowledge that virtual data rooms (VDRs) are widespread today, yet some enterprises still cannot decide whether to start using them. We believe this is simply because they are not aware of the strengths of VDRs and of the shortcomings of conventional data rooms and other data-warehousing systems. Above all, online deal rooms offer a wide range of capabilities that land-based venues and other data-warehousing systems do not. We have therefore decided to outline the advantages of virtual data rooms over ordinary repositories and other data-warehousing systems.

    • If you use a traditional data room and want to close M&A deals, you invite your prospective buyers to review your documents. If they come from different countries, they have to pay over the odds to travel. With a deal room, the data can be accessed from anywhere in the world, so they save a great deal of money and time. What is more, working this way can raise the profile of your business and attract more companies to contact you.
    • With a virtual room, everything happens very quickly: the provider's team organizes your archive, uploading a gigabyte of documents takes about a second, and the search tools find whatever you need almost instantly.
    • Nowadays, using a deal room (see this virtual data room comparison), you can negotiate with clients from different countries directly inside the secure online data room and share confidential data with them. You can also work with several potential investors at the same time without any of them knowing about the others, which reduces the risk of a deal falling through. This is possible thanks to the Q&A functionality. Could you do that with a traditional data room?
    • As for cost, alternative data-warehousing systems are not expensive: entry-level plans start at around $100 per month, and you do not have to pay for staff as you would with a traditional data room. Providers usually offer several kinds of subscriptions, so you can choose the one that suits you. In addition, the best virtual services offer free trials, which let you test a virtual data room before signing a contract.
    • If you want to work with clients from various parts of the world, think about their needs: support for multiple languages will be useful to them, and some data rooms even have their own machine translation systems.
    • As for the format, the documents are stored online because VDRs are web-based. This matters because you and your clients can review the files from any country, whereas with a land-based data room you had to review the files in one place. Moreover, since smartphones are ubiquitous these days, you can work with a VDR from your mobile phone, and you can also keep copies of your files on a USB stick.
    • Security is a key criterion when choosing the best online service. Ordinary repositories are reasonably safe, but with other cloud drives there is no guarantee that you will not suffer a data leak. To avoid these risks, we recommend working with virtual repositories. With safeguards such as antivirus protection, authorization, and customizable document watermarks, you can be confident that your documents are secure. On top of that, reputable virtual venues are certified, so you can rely on them.
    • Alternative data-warehousing systems support a range of file formats, which is convenient. Other data-warehousing systems can do this too, but land-based repositories let you work with paper documents only.
    • Electronic repositories can serve a wide variety of industries, from investment banking and catering to the power-generating sector. In our view, other data-warehousing systems are not ready for this. On the other hand, not every data room works with every field, so pay attention to this while searching for a sophisticated virtual venue.

    In short, virtual platforms offer far more useful features than conventional data rooms and other data vaults, and they can improve the productivity of almost any sphere.

     

     


Virtual Platforms in contrast to conventional data rooms and other data-warehousing systems

  • Intro

    This blog post describes a great JavaScript library, Ramda.js. I will introduce some basic functional programming concepts in order to give you a feel for what this style of writing code is about. At the end of the article you will find some useful links and sources which describe Ramda.js and functional programming in more detail.

    Before we start: if you want to play with Ramda right away, you can go to the Ramda REPL, which is great for quick, iterative prototyping and experimentation.

    Ramda.js – our hero

    Ramda.js is a practical functional library for JavaScript programmers. It is a set of very useful functions which help you operate on data and transform it in a concise and elegant way. The term ‘data’ here covers all native JavaScript values: objects, arrays, numbers, functions, etc.

    Let’s say we want a list of all properties which a JS object owns. We can use Ramda’s keys function:

    R.keys({a: 1, b: 2, c: 3});
    //=> ['a', 'b', 'c']

    The letter R represents a Ramda library instance which was imported, loaded or initialized in some way. There are many ways to do this – you can find all of them listed on the Ramda.js page. Now let’s take a look at some basic functional programming concepts and how they are used in Ramda.
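
    For example, in Node.js (assuming the ramda package has been installed from npm), loading the library is as simple as:

    // Node.js: load Ramda (after installing it with: npm install ramda)
    var R = require('ramda');

    R.keys({a: 1, b: 2, c: 3});
    //=> ['a', 'b', 'c']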

    Higher order functions

    Higher order functions are simply functions which either take another function as input or return a function as output. A great example is the classic map function:

    R.map(x => x * 2, [1, 2, 3]);
    //=> [2, 4, 6] 

    The new ES6 ‘fat arrow’ syntax was used here – it lets us define a lambda function, which is then passed to map, a higher order function. It’s worth mentioning that functions are first class citizens in JS, so we can rewrite it as:

    var double = x => x * 2;
    R.map(double, [1, 2, 3]);
    //=> [2, 4, 6]

    The map function accepts a second argument, which is the array it will operate on. In our simple example, map will call the double function for each element of the array and return a new array with the transformed values.

    Just by introducing such a trivial example we now have two more concepts to cover. Let’s get cracking!

    Immutability

    If we take the last argument, assign it to a variable, and then pass that variable to the map function, the result will be the same; however, the original variable stays unchanged. You can easily check it using the code snippet below:

    var double = x => x * 2;
    var arr = [1, 2, 3];
    R.map(double, arr);
    //=> [2, 4, 6]
    // arr is still [1, 2, 3]

    When we operate on some data and it remains unchanged, that data is immutable. When our data is not mutated (changed, modified) by design, our programs are safer; avoiding side effects is generally good practice. JavaScript’s built-in data structures are mutable. The good news is that Ramda’s functions do not mutate data by design, and in that way they encourage a more functional and safer programming style.
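
    As a small contrast (a sketch using Ramda’s standard append function): a native array method mutates the array in place, while the Ramda equivalent returns a new array and leaves the original alone.

    var arr = [1, 2, 3];
    arr.push(4);        // native push mutates the array
    // arr is now [1, 2, 3, 4]

    var arr2 = [1, 2, 3];
    R.append(4, arr2);  //=> [1, 2, 3, 4] – a brand new array
    // arr2 is still [1, 2, 3]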

    Currying

    This mysterious term comes from a surname: Haskell Curry was a great American mathematician, and ‘currying’ owes its name to him. It is a transformation which takes a function and returns another function. The input function can have many arguments – let’s say n. Calling the transformed function with one argument returns another function of the remaining n-1 arguments; calling that with one argument gives a function of n-2 arguments, and so on, until all the arguments have been supplied and the result is finally returned. Let’s take a simple example:

    var add = (a, b) => a + b;
    
    //transform add fn
    var curried = R.curry(add);
    var curriedLast = curried(5);
    var result = curriedLast(5);
    // result === 10
    
    curried(5)(5); //10
    curried(5, 5); //10

    We have a simple add function defined as a lambda. We curry it and then call it one argument at a time until we obtain the result. That’s it!
    A cool feature of Ramda is that we can call the curried function as many times as the original function had parameters (twice in this example), or we can pass all the parameters at once. That’s really convenient.

    Why should we care about currying at all, though?

    The answer is: it encourages code reuse and reduces boilerplate.

    Let’s rewrite the very first example with the map function. We will create a doubler function which simply takes an array and returns a new array with all its elements doubled:

    var double = x => x * 2;
    var doubler = function(arr) {
        return R.map(double, arr);
    };
    doubler([1, 2, 3]);
    //=> [2, 4, 6]

    Thanks to the new ES6 syntax we can do even better:

    doubler = arr => R.map(double, arr);

    It is much more concise. When using Ramda’s auto-currying feature we achieve the final effect:

    doubler = R.map(double);
    doubler([1, 2, 3]);
    //=> [2, 4, 6]

    I leave you with the question of which of the three styles is the most concise and clean, and which best encourages code reuse.
    The only thing required in order to write in the third style is to be aware of currying.

    Function-first, data-last API

    We will now focus on argument order. Do you remember our simple map example? What was the order of the arguments being passed? The last argument was the data, and the first argument was the function. That is the function-first, data-last API! It is a detail, but quite an important one. In functional programming it is common to place the arguments that change most frequently in the last position. Following that convention, we can go with the flow of the program instead of introducing additional wrapper functions just to reorder arguments, which would make the code less clean and unnecessarily complex.
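
    As a small illustration (a sketch using Ramda’s standard prop function; the sample data is made up): because the data comes last and Ramda’s functions are curried, leaving the data out gives us a reusable, point-free transformation.

    // function-first, data-last: omit the data to get a reusable function
    var getNames = R.map(R.prop('name'));

    getNames([{name: 'Ala'}, {name: 'Ola'}]);
    //=> ['Ala', 'Ola']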

    Function composition

    Last but not least comes function composition. This programming concept is all about taking the output of one function and passing it as the input to another function. That way we can glue many functions together in order to produce the final result.

    var mul2 = x => x * 2;
    var add1 = y => y + 1;
    mul2(add1(1));
    //=> 4

    Why not try using function composition?

    var add1mul2 = R.compose(mul2, add1);
    add1mul2(1);
    //=> 4

    Ramda’s compose function returns a new function which is the composition of add1 and mul2 (applied right to left). It means we take the input (1), pass it to add1 to obtain the result (2), and pass that along to mul2, which returns 4.

    Sometimes we need to use a complex function with many arguments. You could say that function composition is useless in such cases. Remember that you can always apply currying first and compose the partially applied functions in a neat and straightforward way.
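
    For example (a small sketch using Ramda’s curried add and multiply functions), we can partially apply the multi-argument functions first and then compose the resulting one-argument functions:

    // R.add and R.multiply are curried, so partial application
    // turns them into one-argument functions we can compose
    var addTenThenDouble = R.compose(R.multiply(2), R.add(10));

    addTenThenDouble(5);
    //=> 30   (5 + 10 = 15, then 15 * 2 = 30)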

    Summary

    We have seen some basic functional programming concepts which are the ‘bricks’ for building really complex systems. At the same time, those systems are easier to maintain and refactor. Ramda uses these concepts heavily and therefore encourages us to write JavaScript in a more functional manner. Thanks to Ramda we can write cleaner, more concise, robust, safe and elegant code. Furthermore, we do not need any additional external dependencies beyond Ramda.js, which is a self-contained library.

    Bibliography

    I gave you really trivial examples in order to explain the concepts and keep them as clear and straightforward as possible. If you want to see Ramda in a real application, I recommend reading another blog post; then you will hopefully appreciate the great value which Ramda.js provides to the developer.

    Below you can find the list of links and resources which were the inspiration for this blog post and for the corresponding lightning talk, which took place at the Source The Way Forward Conference in November 2015.


RAMDA.JS – Short introduction to functional programming

  • What’s new in JVM

    Recently I had the opportunity to attend the JDD conference in Krakow. Among the many interesting presentations given there, the one by Jaroslav Tulach from Oracle Labs stood out. Unlike most other speakers, he is in a position to reveal some information about what is in the works at Oracle and to give the audience some insight into the direction in which the entire Java ecosystem is going. I hope my short description of it will be interesting.

    More and more development on the JVM is done in languages with dynamic type checking, like Ruby, Python and JavaScript, or even niche ones like R, used for statistical calculations. Execution of such languages on the current version of the JVM is suboptimal, because it usually requires writing an interpreter in Java. In this approach the JVM doesn’t see the actual application code, so it can’t perform any optimisations based on that knowledge. Additionally, in such a model interoperability between different languages is very problematic. The goal of Jaroslav’s team is to address the performance problems with the Graal project and to improve interoperability with the Polyglot engine.

    Disclaimer

    This article is based only on my understanding of the presentation and the information on the project website. It is not endorsed or approved by Oracle. It is entirely possible I misunderstood something, and the presentation itself came with a lengthy disclaimer reminding the audience that all the features are still in development. This article is for information only.

    Graal and Truffle

    The Graal project is a new JIT (just in time) compiler for HotSpot. The job of the JIT compiler is to turn Java bytecode into native code which can be executed by the CPU. Unlike HotSpot, it is written in Java, and it provides a Java API called Truffle which can access many of the compiler’s features. This API can be used to describe the syntax of another language, like Ruby or JavaScript, and feed this information into the compiler along with information about possible runtime optimisations. For example, take the expression “A + B”. In a dynamic language this expression may do very different things depending on the run-time types of the variables A and B: if they are numbers, they should be added; if they are strings, they should be concatenated; and if A is a string and B is a number, then B should be converted to a string and appended to A. Using the Truffle API it is possible to give the JVM this information, so that if the expression “A + B” is seen and it is known that A and B are integers, the expression will be replaced by a simple addition.
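
    JavaScript itself illustrates this nicely – the very same expression means different things depending on the run-time types (a plain JS sketch, not the Truffle API):

    var plus = (a, b) => a + b;

    plus(1, 2);     //=> 3      numeric addition
    plus('a', 'b'); //=> 'ab'   string concatenation
    plus('a', 2);   //=> 'a2'   number converted to string and appended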

    However, later in the execution of the program the values A or B may no longer be integers. This situation must be detected, and the compiled expression needs to be deoptimised back to the more general implementation. The Truffle API provides a way to describe deoptimisation conditions using annotations. Additionally, the Graal developers claim it excels at handling such optimisations and deoptimisations in a safe and effective manner. One of the ways to do so is speculative execution of the code: for example, it is possible to execute some part of the syntax tree in the hope that specific types are used in it; if the assumption turns out to be incorrect, an exception is thrown and the code is executed again without the optimisations.

    Polyglot

    Today’s developers tend to be picky when it comes to languages. For example, web developers used to writing JavaScript for the client side often feel like using it on the server side as well. To enable this, Graal contains a JavaScript engine called Graal.js. Node.js is not yet fully supported by it, but support is said to be in progress, and it uses the performance improvements for dynamic languages outlined above. Clearly, the goal of all this is to have a JavaScript implementation which is well integrated with Java technologies on the server side.

    The goal of the Polyglot engine is to improve this integration by giving different languages the ability to share objects. Expressions in different languages can be imported into the engine and become visible in it. All the standard JVM tools, like debuggers and profilers, are planned to work across the different languages as well. To see where this is going, you can watch this video, in which Jaroslav shows cross-language debugging using the NetBeans IDE.

    Conclusion

    The goals of the Graal project are rather ambitious, especially when it comes to performance. Personally, I am most looking forward to the interoperability features Polyglot promises. I hope they may help to decrease the fragmentation of the current JVM ecosystem, which is pretty horrible. I also wonder how these interoperability features compare to what is available in other environments, like the .NET platform. If you have any thoughts about how interoperability and cross-language debugging support work in other application virtual machines, please let me know in the comments.


Developments in the JVM for Dynamic Languages

  • Fortran 77 adventure

    Over the last few months I’ve spent more than 100 hours developing a Fortran 77 codebase due to scientific obligations. I would like to share my experiences briefly and show a few interesting features of this forgotten language.

    Intro

    Fortran was developed in the 1950s by IBM. It is an imperative programming language, initially purely procedural, now described as general-purpose with the ability to write code in many paradigms. Fortran has a long history in two distinctive fields: numeric computation and scientific computing. It is famous for a wide range of mathematical libraries which are one of a kind (nobody dares to rewrite them…) and for being the language used to benchmark supercomputers (like the brand new Prometheus at AGH in Cracow).

    First blood

    I inherited over 3000 lines of code responsible for highly specific physics calculations, all of it written in one single file. The first line was written over 10 years ago, and since then the file has been changed many times by a myriad of people of multiple nationalities.
    After a quick inspection I noticed that no external libraries were required. “Compilation should be a piece of cake,” I thought. I downloaded the newest version of gfortran, typed the proper command into the terminal and received a failure with this message:

    double complex arr(0:1, 0:1, 0:1, 0:1, 0:1, 0:1, 0:1, 0:1)
                                                          1
    Array specification at (1) has more than 7 dimensions
    

    I trusted the person responsible for providing me with the file and was certain that compilation would pass without a fuss. After some digging and checking the available compilers, I determined that the code had been compiled with the commercial compiler from Intel ($699 – and this language is supposed to be dead…). I made up my mind and started digging deeper and deeper into the specifics of the language to fix all the errors and adjust the code to the open source compiler. I discovered that Fortran 77 is not named by accident and that ‘7’ has a deeper meaning:

    • arrays may have up to 7 dimensions (see the error above)
    • a variable name cannot be longer than 6 characters (but I like to think of it as “up to 7, exclusive”)
    • statements must start in column 7 and must end at column 72
    • the language first appeared in public in 1957

    What happens when we write longer lines? Characters placed beyond column 72 are simply ignored, which sometimes causes very nasty bugs. What do we do when we want to add many variables? The continuation sign to the rescue! When we place any character in column 6, the compiler treats the marked line as a continuation. In the example below we use a dot, which is very popular in the Fortran community.

          SUM = VAR1 + VAR2
         . + VAR3
    


    “Can you see how beautiful Fortran code looks in the terminal?”

    To sum up: Fortran is not a free-format language, and as far as I know all these constraints were introduced for convenient editing in terminal editors (like vim and emacs), which you use a lot when performing massive calculations. Below is some sample code (from Wikipedia):

    C AREA OF A TRIANGLE - HERON'S FORMULA
    C INPUT - CARD READER UNIT 5, INTEGER INPUT, NO BLANK CARD FOR END OF DATA
    C OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
    C INPUT ERROR DISPLAYS ERROR MESSAGE ON OUTPUT
      501 FORMAT(3I5)
      601 FORMAT(" A= ",I5,"  B= ",I5,"  C= ",I5,"  AREA= ",F10.2,"SQUARE UNITS")
      602 FORMAT("NORMAL END")
      603 FORMAT("INPUT ERROR OR ZERO VALUE ERROR")
          INTEGER A,B,C
       10 READ(5,501,END=50,ERR=90) A,B,C
          IF(A.EQ.0 .OR. B.EQ.0 .OR. C.EQ.0) GO TO 90
          S = (A + B + C) / 2.0
          AREA = SQRT( S * (S - A) * (S - B) * (S - C) )  
          WRITE(6,601) A,B,C,AREA
          GO TO 10
       50 WRITE(6,602)
          STOP
       90 WRITE(6,603)
          STOP
          END
    

    Default type

    The codebase I needed to change had been developed by people not skilled in software craftsmanship. Most variables are globally scoped, naming conventions are missing, and some variables are not declared at all. When the compiler cannot find a variable declaration it uses a “default type”, which is inferred from the variable name: if the name starts with one of the letters I through N it becomes an Integer; in all other cases it is promoted to the Real type. The compiler alone has quite a bunch of dialect options which are very useful and very dangerous. For example, the default type length can be chosen from the command line like so:

    gfortran -fdefault-real-8 -fdefault-double-8 program.f
    

    The type length can and should be fixed in the code at variable declaration. Sometimes code authors get lazy, build steps get lost, and all hell breaks loose…

    Common blocks

    One feature, called the “common block”, impressed me the most. It is a popular practice that, instead of providing a long list of arguments, we use a configuration object with many fields. In other words, instead of writing (JavaScript used for brevity):

    function (a, b, c) {}

    we use a helper object, sometimes called a config object:

    var obj = {a: '...', b: '...', c: '...'};
    function(obj) {}
    

    The Fortran authors went one step further: instead of passing an object around, they decided it would be useful to name a part of memory and allow it to be addressed globally wherever necessary. A sample:

    !GLOBAL SCOPE
    REAL ARRAY1(100) !Array with one hundred real values
    INTEGER VARIABLE2 !Some integer
    COMMON/MYMESH/ ARRAY1,VARIABLE2
    
    !... some code ...
    
    FUNCTION FOO() 
        REAL ARRAY2(100)
        INTEGER VARIABLE2
        COMMON/MYMESH/ ARRAY2,VARIABLE2
    END FUNCTION
    

    And just like that, we no longer have to worry about long argument lists. You may wonder why all the comments are preceded by ! (an exclamation mark). In the Fortran 77 standard you should place a C (or *) in the first column to make a line a comment; here we conveniently used a compiler feature which allows ! to be used anywhere.

    Summary

    The language itself has evolved over the last few decades (the current stable version is named Fortran 2008); it looks friendlier than good ol’ 77, but the spirit of mathematical calculation is properly preserved. Even though I enjoyed my brief adventure, I prefer working with languages which promote better coding practices and hide the dangerous features a little bit deeper.


Fortran 77 adventure

  • Hybrid mobile applications

    These days there is a tendency to write everything using a single programming language and a single framework. JavaScript is not the only language trying to conquer every possible programming area; in the past there were similar attempts to develop every kind of application in one language or framework. There was ASP.NET for WinForms developers, which isolated them from web mechanics and let them adapt easily to the new environment. Like any such attempt, it had the same problems. At the beginning everything looks nice and promising in the commercials, at tech talks and conferences, but what we see is only the colourful, shiny package of what THEY want us to see. It is like taking the first shot from a drug dealer for free: everything looks nice and bright, the world is a better place to live, but it is only the first impression, and it carries hidden debt. When your application grows and your client’s requirements become more custom, not so trivial and typical, it turns out that you have to look under the cover of the nice abstraction the platform has given you. After that you realize that under this nice and shiny abstraction there is hell, and you have to pay your debt back.

    I had a similar experience with a hybrid mobile app. I started with fascination and enthusiasm – OMG, I can develop two applications simultaneously, for both Android and iOS, in JavaScript! What is more, if the customer wanted, I could also easily migrate to another platform like BlackBerry, Kindle or Windows Phone. I had no knowledge of Android development, nor of Java or Objective-C, but I had Cordova, which would do all the dirty work for me. Isn’t that sweet?

    How it works

    In my project I’ve been using the Ionic platform. Ionic is a mix of plugins, tools and frameworks that lets you create an application prototype really fast, and you get everything you need in one package. It consists of Cordova (the PhoneGap engine), Angular plus a lot of mobile-friendly controls (written as Angular directives), and some scripts that help to build and deploy the application from the command line. Ionic comes as an npm package: you just install it globally via npm and you can start working with it. Creating your first application is easy-peasy. All you need to do is:

    $ ionic start myApp
    

    …and booyah! – you are done. Ionic will set up everything you need for your first application. The folder structure of an Ionic project looks like this:

    \---myApp
        +---hooks
        |   +---after_platform_add
        |   \---after_prepare
        +---plugins
        |   +---com.ionic.keyboard
        |   +---cordova-plugin-console
        |   +---cordova-plugin-device
        |   +---cordova-plugin-splashscreen
        |   \---cordova-plugin-whitelist
        |       \---src
        |           +---android
        |           \---ios
        +---scss
        \---www
            +---css
            +---img
            +---js
            +---lib
            \---templates
    
    hooks

    This folder contains JavaScript code executed at various points of the build process life cycle; the mechanism is provided by Cordova, not by Ionic itself – Ionic only adds some extra hook scripts. The hooks folder contains subfolders whose names are crucial, because Cordova executes the scripts in the appropriate order based on this subfolder naming convention. Be aware that the hooks provided by Ionic can have bugs.

    plugins

    This folder contains all the glue-code plugins. JavaScript cannot call mobile device APIs directly – the filesystem API, GPS and others. To make that possible, Cordova provides a lot of plugins written in native code for each platform. For instance, the GPS plugin consists of code written in Objective-C for iOS, the same functionality written in Java for Android, and so on. This native code is responsible for calling the device API and for acting as a mediator between the platform and your JavaScript. Cordova has a public plugin repository and a package manager that works similarly to npm, so it is a good idea to exclude the plugins directory from source control. What is more, Ionic comes with some hooks that help with this setup: whenever you install a plugin, a hook script adds the plugin’s name to the package.json file, so when a colleague downloads the latest source code from git he will be able to install all the needed plugins. Unfortunately this hook has a crucial bug. Below is the code of the hook:

    var exec = require('child_process').exec;
    var path = require('path');
    var sys = require('sys');
    
    var packageJSON = null;
    
    try {
        packageJSON = require('../../package.json');
    } catch (ex) {
        console.log('\nThere was an error fetching your package.json file.');
        console.log('\nPlease ensure a valid package.json is in the root of this project\n');
        return;
    }
    
    var cmd = process.platform === 'win32' ? 'cordova.cmd' : 'cordova';
    // var script = path.resolve(__dirname, '../../node_modules/cordova/bin', cmd);
    
    packageJSON.cordovaPlugins = packageJSON.cordovaPlugins || [];
    packageJSON.cordovaPlugins.forEach(function (plugin) {
        var params = plugin.split('|');
        var pl = params[0];
        var val = params.length > 0 ? params[1] : undefined;
        var command = pl + (val != undefined ? (" --variable " + val) : "");
    
        exec('cordova plugin add ' + command, function (error, stdout, stderr) {
            sys.puts(stdout);
        });
    });
    

    Nothing special here, yet we lost a lot of hair because of the code above. The problem is that it runs asynchronously, and downloading a plugin from the repository is not the end of the work: after installation, each plugin makes changes to a configuration file. It is a typical race condition – a lot of concurrently spawned processes trying to write to the same file at once. To fix it, just change the exec function to execSync.
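
    A minimal sketch of that fix, assuming a Node version that ships child_process.execSync (error handling omitted), could look like this:

    var execSync = require('child_process').execSync;

    packageJSON.cordovaPlugins.forEach(function (plugin) {
        var params = plugin.split('|');
        var command = params[0] + (params[1] !== undefined ? (' --variable ' + params[1]) : '');

        // execSync blocks until 'cordova plugin add' finishes,
        // so the plugins no longer race to update the config file
        console.log(execSync('cordova plugin add ' + command).toString());
    });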

    scss

    By default Ionic adds Sass style sheets for you, plus gulp to compile them and output the CSS to the www folder.

    www

    This folder holds all the source files: assets, images and libraries. Third-party libraries are located in the lib subfolder; Ionic uses bower to download them.

    platform

    This is essentially the build folder.

    $ ionic platform add android
    

    With the above command Ionic will auto-generate an Android Studio solution with all the necessary files. It will copy all the plugins and the www folder with our code, and wrap it with code that hosts the project as a full-screen WebView control. In the end it is just an Android Studio project, so we can open it, debug the plugin code, etc.

    Pain points of working with Ionic

    performance

    In the end it is only JavaScript and HTML, so it will never be as fast as a native application, but the major performance hit in Ionic does not come from the HTML or JS. The Ionic team decided to write as much as possible themselves to provide a complete set of controls of every type, but they didn’t care about performance… If you look at the timeline in Chrome DevTools you may be really surprised at how badly these controls are written: there are a lot of unnecessary timers and triggers, and even a single button click in Ionic triggers thousands of operations. The code is really poor, so if you do not need hundreds of mobile-specific controls I would suggest not using Ionic at all. You can achieve the same functionality using Cordova + Angular + some other controls library, such as Mobile Angular UI, on their own.

    leaking abstraction

    There are thousands of plugins available out there, but in the end you always need something custom, or you need to tweak an existing plugin a bit. In that scenario you have to get your hands dirty, write some Java and Objective-C code and debug it in Xcode or Android Studio, etc.

    build process

    Ionic comes with a build process, but it is not fully configurable and extensible. If you want more control over your asset pipeline, it is better to use gulp. In that scenario you have to introduce another folder which contains your source code, then compile and minify it and output it to the www folder for further Cordova processing. Otherwise, with the stock Ionic build process, you cannot minify your scripts and you can do nothing about the bower components, so all the garbage (tests, images, examples, docs) that bower downloads will end up in your distribution package.
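
    A minimal sketch of such a gulp step (the src folder layout and the gulp-uglify plugin are just one possible setup, shown for illustration) might look like this:

    var gulp = require('gulp');
    var uglify = require('gulp-uglify');

    // minify the application scripts from src and emit them into www,
    // which the regular cordova/ionic build then picks up
    gulp.task('scripts', function () {
        return gulp.src('src/js/**/*.js')
            .pipe(uglify())
            .pipe(gulp.dest('www/js'));
    });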

    developing process

    Especially when you are developing for iOS, you must be prepared for a lot of frustration. You cannot develop on a PC; you have to have a Mac. It forces you to get familiar with the crazy security policy imposed by Apple: you have to generate countless certificates, provisioning profiles and the like to publish your application. On top of that, development is slow because of the long testing loops: you have to build the package, deploy it on a device, attach the Chrome remote debugger, etc., and it is time consuming.

    Good sides of hybrid applications

    For some purposes it is a really good alternative to a native application. When your application is rather typical and you don’t want fancy animations or some custom behaviour or functionality, a hybrid application is for you. So it depends on the scenario: there is no silver bullet, and in some scenarios, used reasonably, it is a good solution.

    My recommendations

    Don’t use Ionic. You can start with an Ionic application just as a test, to get familiar with hybrid applications and how all of this works, but when you want to build a production-ready application, just grab all the needed pieces yourself: take Cordova or PhoneGap, grab gulp or grunt, grab Angular or React or whatever, and build your own working tool set.

  • 0
  • 13
  • Hybrid mobile applications

    These days there is a tendency to write everything using single programming language, single framework. JavaScript is not the only one language that is trying to conquer every possible programming area. In the past there were also similar attempts to develop all kind of applications in one language/framework. There was ASP.NET for win-forms developers which isolated them from web mechanics and made them easily adopting to new environment. As any such attempt it had the same problems. At the beginning all looks nice and promising in commercials, on tech talks, conferences but what we see is only colourful, shiny package of what THEY want us to see. It is like taking first shot from drug dealer for free. Everything looks nice and bright, world is better place to live, but it is only the first impression and it has hidden debt. When your application is growing and your client’s requirements become more custom, not so trivial and typical, it is turning out that you have to look under the cover of nice abstraction given you by the platform. After that you realize that under this nice and shiny abstraction there is hell and you have to pay your debt back.

    I have similar experience with dealing with hybrid mobile app. I have started with fascination and enthusiasm – OMG I can develop with JavaScript simultaneously two applications for both Android and IOS! What is more, if customer want I could also easily migrate to another platform like blackberry, kindle or windows phone. I have neither knowledge of android development nor knowledge of java and objective-c, but I have cordova which will do all of the dirty job for me. Isn’t that sweet?

    How it works

    In my project I’ve been using Ionic platform. Ionic is mix of plugins, tools, frameworks that allows you to create your application prototype really fast and you have all you need in one package. It consists of cordova (which is phonegap engine), angular plus a lot of mobile friendly controls (written as angular directives) and some scripts that helps to build and deploy application from command line. Ionic comes as npm package. What you only need is to install it globally via npm and then you can start working with it. Creating your first appication is easy-peasy. All you need to do is:

    $ ionic start myApp
    

    …and booyah! – you are done. Ionic will setup everything you need for your first application. Folder structure of Ionic project looks like below:

    \---myApp
        +---hooks
        |   +---after_platform_add
        |   \---after_prepare
        +---plugins
        |   +---com.ionic.keyboard
        |   +---cordova-plugin-console
        |   +---cordova-plugin-device
        |   +---cordova-plugin-splashscreen
        |   \---cordova-plugin-whitelist
        |       \---src
        |           +---android
        |           \---ios
        +---scss
        \---www
            +---css
            +---img
            +---js
            +---lib
            \---templates
    
    hooks

    It contains JavaScript code to be executed during building of process life cycle, it is mechanism provided by cordova not ionic itself. Ionic is providing only some additional hooks scripts. Hooks folder contains sub folders which naming is crucial because cordova is executing this script in appropriate order based on this sub folders name convention. Be aware that hooks provided by ionic can have bugs.

    plugins

    This folder contains all the glue code plugins. Javascript cannot call directly mobile device API, like filesystem API, GPS or others. To achieve this, cordova provides a lot of plugins written in native code corresponding to the platforms. For instance GPS plugin consists of code written in objective-c for IOS, the same functionality written in java for android and so on. This native code is responsible for calling device API and for being a mediator between platform and your javascript. Cordova have public plugins repository and package manager. Cordova plugin manager works similar to npm, so it is good idea to exclude plugins directory from source control. What is more Ionic comes with some hooks that helps dealing with this solution. Whenever you are installing plugin, hook script is adding this plugin’s name to package.json file. With this approach when your colleague downloads latest source code from git, he will be able tp install all needed plugins. Unfortunately this hook hand some crucial bug. Below code of this hook:

    var exec = require('child_process').exec;
    var path = require('path');
    var sys = require('sys');
    
    var packageJSON = null;
    
    try {
        packageJSON = require('../../package.json');
    } catch (ex) {
        console.log('\nThere was an error fetching your package.json file.');
        console.log('\nPlease ensure a valid package.json is in the root of this project\n');
        return;
    }
    
    var cmd = process.platform === 'win32' ? 'cordova.cmd' : 'cordova';
    // var script = path.resolve(__dirname, '../../node_modules/cordova/bin', cmd);
    
    packageJSON.cordovaPlugins = packageJSON.cordovaPlugins || [];
    packageJSON.cordovaPlugins.forEach(function (plugin) {
        var params = plugin.split('|');
        var pl = params[0];
        var val = params.length > 0 ? params[1] : undefined;
        var command = pl + (val != undefined ? (" --variable " + val) : "");
    
        exec('cordova plugin add ' + command, function (error, stdout, stderr) {
            sys.puts(stdout);
        });
    });
    

    Nothing special here, yet we lost a lot of hair because of the code above. The problem is that exec runs asynchronously, and downloading a plugin from the repository is not the end of the work: after installation, each plugin makes changes to the configuration file. It is a typical race condition – many child processes trying to write to the same file at once. To fix it, just change the exec function to execSync.
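    A minimal sketch of the fixed hook, using Node’s built-in execSync from child_process (the plugin list format mirrors the script above; treat it as an illustration, not the exact Ionic code):

    var execSync = require('child_process').execSync;

    var packageJSON = null;

    try {
        packageJSON = require('../../package.json');
    } catch (ex) {
        console.log('\nThere was an error fetching your package.json file.');
        console.log('\nPlease ensure a valid package.json is in the root of this project\n');
        return;
    }

    packageJSON.cordovaPlugins = packageJSON.cordovaPlugins || [];
    packageJSON.cordovaPlugins.forEach(function (plugin) {
        var params = plugin.split('|');
        var name = params[0];
        var variable = params.length > 1 ? params[1] : undefined;
        var command = name + (variable !== undefined ? ' --variable ' + variable : '');

        // execSync blocks until the plugin is fully installed, so each plugin
        // finishes writing to the shared Cordova config before the next one starts.
        console.log(execSync('cordova plugin add ' + command).toString());
    });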

    scss

    Ionic by default adds Sass style sheets for you, plus a gulp task that compiles them and outputs the CSS to the www folder.
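    The generated task is roughly equivalent to the sketch below, assuming the gulp-sass plugin (the real gulpfile may also minify and rename the output, and the entry file name may differ):

    var gulp = require('gulp');
    var sass = require('gulp-sass');

    // Compile the Sass entry point and write the resulting CSS into www/css,
    // which is the folder Cordova packages into the application.
    gulp.task('sass', function (done) {
        gulp.src('./scss/ionic.app.scss')
            .pipe(sass())
            .pipe(gulp.dest('./www/css/'))
            .on('end', done);
    });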

    www

    All the source files are placed in this folder: assets, images and libraries. Third-party libraries are located in the lib subfolder; Ionic uses Bower to download them.

    platform

    This is essentially the build output folder.

    $ ionic platform add android
    

    With the above command Ionic will generate an Android Studio project with all the necessary files. It copies all the plugins and the www folder with our code, and wraps it with code that hosts the project in a fullscreen WebView control. In the end it is just an Android Studio project, so we can open it, debug plugin code, etc.

    Pain points of working with Ionic

    performance

    In the end it is only JavaScript and HTML, so it will never be as fast as a native application, but the major performance impact in Ionic does not come from HTML or JS. The Ionic team decided to write as much as possible themselves to provide a complete set of controls of every type, but they did not care much about performance. If you look at the timeline in Chrome DevTools you may be really surprised how badly these controls are written: there are a lot of unnecessary timers and triggers, and even a single button click in Ionic triggers thousands of operations. The code is really poor, so if you do not need hundreds of mobile-specific controls I would suggest not using Ionic at all. You can achieve the same functionality by combining Cordova + Angular + some other controls library, like Mobile Angular UI, yourself.

    leaking abstraction

    There are thousands of plugins available out there, but in the end you always need something custom, or need to tweak an existing plugin a bit. In this scenario you have to get your hands dirty, write some Java or Objective-C code, and debug it in Xcode or Android Studio.
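    On the JavaScript side the glue is the cordova.exec bridge call; the plugin and action names below are hypothetical:

    // Hypothetical plugin: the native class registered as 'MyPlugin' (Java on
    // Android, Objective-C on iOS) has to implement the 'getBatteryLevel' action.
    cordova.exec(
        function (level) { console.log('battery level: ' + level); }, // success callback
        function (err) { console.error('plugin error: ' + err); },    // error callback
        'MyPlugin',          // native service (class) name
        'getBatteryLevel',   // action to execute on the native side
        []                   // arguments passed to the native code
    );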

    build process

    Ionic comes with a build process, but it is not fully configurable and extensible. If you want more control over your asset pipeline, it is better to use gulp. In this scenario you have to introduce another folder that contains your source code, then compile it, minify it and output it to the www folder for further Cordova processing. With the stock Ionic build process you cannot minify your scripts, and you cannot do anything about Bower components, so all the garbage (tests, images, examples, docs) that Bower downloads will end up in your distribution package.
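    A minimal sketch of such a custom pipeline, assuming a hypothetical src folder and the gulp-uglify plugin:

    var gulp = require('gulp');
    var uglify = require('gulp-uglify');

    // Take the application sources from src/, minify them and drop the result
    // into www/js, which is what Cordova packages into the final application.
    gulp.task('scripts', function () {
        return gulp.src('src/js/**/*.js')
            .pipe(uglify())
            .pipe(gulp.dest('www/js'));
    });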

    developing process

    Especially when you are developing for iOS, you must be prepared for a lot of frustration. You are not able to develop on a PC; you have to have a Mac. That forces you to get familiar with Apple’s crazy security policy: you have to generate countless certificates, provisioning profiles and the like just to publish your application. On top of that, development is slow because of the long testing loops: you have to build the package, deploy it on a device, attach the Chrome remote debugger, etc., and it is time consuming.
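    Roughly, each test iteration on Android looks something like the commands below (Ionic v1 CLI; the WebView is then inspected from chrome://inspect in desktop Chrome):

    $ ionic build android
    $ ionic run android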

    Good sides of hybrid applications

    For some purposes it is a really good alternative to a native application. When your application is rather typical and you do not want fancy animations or some custom behaviour or functionality, a hybrid application is for you. So it depends on the scenario: there is no silver bullet, but where you use it reasonably it is a good solution.

    My recommendations

    Don’t use Ionic. You can start with an Ionic application just for a test, to get familiar with hybrid applications and how all of this works, but when you want to build a production-ready application, grab all the needed pieces yourself: take Cordova or PhoneGap, grab gulp or grunt, grab Angular or React or whatever, and build your own working tool set.



  • How do you develop an algorithm?

    Little Johnny received two identical glass balls on his 10th birthday. Johnny is a well-known rascal, and the first thing he wanted to do with his glass balls was to break them. He lives in a 100-story building, and he wanted to know from which floor the balls will break. In order to minimize the number of times he needs to drop the glass balls, Johnny needs an algorithm to determine from which floor they will break.

    The story above is one of many algorithm-oriented puzzles. How can one develop a good algorithm to solve the problem?

    “The Brute Force”

    The first algorithm that comes to mind is usually the so-called Brute Force algorithm. Simply stated, we solve the problem the easiest way possible. In this case, Johnny takes one ball and starts dropping it from the first floor, then the second, and so forth. He will eventually find the floor on which the ball breaks. However, with ease comes a great cost: we can easily see that the worst case for Johnny would be 100 drops, if the ball only breaks on the last floor. There must be a better way …

    “Divide and Conquer”

    What does the famous maxim of Philip II of Macedon have to do with algorithms? As we saw in the previous solution, we need N drops for an N-story building. However, we have one spare ball, which we can use to cheaply gain some information. Let’s say we drop the ball from the 50th floor – when it doesn’t break, we can drop it from the 75th, then the 88th, 94th, 97th, 99th and finally the 100th. As we can see, the worst case of the previous solution is now only seven steps instead of 100. Major progress? Not really. The worst case is now when the ball actually breaks on the 50th floor: Johnny then needs to go through all the floors from the first to the 49th with the second ball, which is 50 drops in total.

    “Divide and Conquer v2”

    What we’ve learned is that when the first ball breaks on the 50th floor, we then need to go through all of the floors below it one by one. Maybe we should instead split the 100 floors evenly and walk through them chunk by chunk. By trial and error, we can try the following algorithm:

    Drop the first ball at floors 10, 20, 30, 40, 50, 60, 70, 80, 90, 100. When the glass ball breaks on the 100th floor we have the worst case: 19 drops (10 drops with the first ball plus 9 with the second one on floors 91–99).

    “Dynamic algorithm”

    We can see one more flaw in the above algorithm: there is a big disproportion in the total number of drops depending on where the first ball breaks. If it breaks on the 10th floor we need only 10 drops in total to know the answer, whereas a break near the top needs 19. The ideal solution would somehow trade away this difference to achieve a similar number of drops in every case.

    We need more insight into the problem. We should rephrase the problem as follows:

    Given the initial number of floors, N, can we find an algorithm that identifies the breaking floor in X drops? Let’s say we drop the first ball from floor X: if it breaks, we need at most X-1 more drops to find the breaking point. If it doesn’t break, our remaining problem is to find an algorithm that finds the breaking floor in Y = X-1 drops in an M = N-X story building. We now have a sub-problem which, once solved, gives us the answer to the original problem. We can repeat this process until we get a 1-story building, for which we know the necessary number of drops is 1. What we get is a dynamic-programming algorithm that works on the basis of sub-problem solutions.

    Once again, through trial and error, we can discover the optimal sequence of first-ball drops:
    14, 27, 39, 50, 60, 69, 77, 84, 90, 95, 99, which gives us 14 drops in the worst-case scenario.
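    For anyone who wants to verify the numbers, the recurrence described above is straightforward to compute; a minimal JavaScript sketch:

    // Worst-case number of drops for an n-story building with two balls,
    // using the sub-problem idea from the text:
    //   drops(n) = 1 + min over the first-drop floor x of max(x - 1, drops(n - x))
    function drops(n, memo) {
        memo = memo || {};
        if (n <= 1) return n;                 // 0 floors -> 0 drops, 1 floor -> 1 drop
        if (memo[n] !== undefined) return memo[n];
        var best = Infinity;
        for (var x = 1; x <= n; x++) {
            // If the first ball breaks at floor x, scan floors 1..x-1 with the
            // second ball; otherwise the same problem remains for n - x floors.
            best = Math.min(best, 1 + Math.max(x - 1, drops(n - x, memo)));
        }
        memo[n] = best;
        return best;
    }

    console.log(drops(100)); // 14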

    General case

    What is the worst case number of drops if the building has N floors? I will leave this one to you!

