Apr 5, 2020

Enjoy Implementing a Calculator with a Stack and Recursive Calls




This is a picture of a unique shoji screen :)

1. Introduction

I enjoyed implementing a calculator. It can calculate as follows :)

1 + 2                   Answer: 3
1 + 2 x (10 + 1)        Answer: 23
(1 + 1) x (1 + 2)       Answer: 6

2. Program

2.1. Input

For the program input, I assume a character string like the following.

[Input]
1+2
1+1*(10+0)
(1+1)*(1+2)

2.2. Output

The program output is simply the calculation result.
For the inputs above, the output is as follows.

[Output]
3
11
6


2.3. Implementation

The calculator can be implemented simply with a stack and a recursive call, as follows.
Thanks to the recursion, the text within parentheses can be evaluated :)

Main.java
package org.inut;

import java.util.*;

class Main { 
    // Apply the pending operator to the current number and push the result onto the stack.
    public void pushCalcStack(int currNum, char operator, Deque<Integer> calcStack) {
        if (operator == '+') {
            calcStack.push(currNum);
        } else if (operator == '-') {
            calcStack.push(-currNum);
        } else if (operator == '*') {
            int prevNum = calcStack.pop();
            calcStack.push(prevNum * currNum);
        } else if (operator == '/') {
            int prevNum = calcStack.pop();
            calcStack.push(prevNum / currNum);
        }
    }
 
    // Evaluate an arithmetic expression; parenthesized sub-expressions are handled recursively.
    public int simpleCalc(String str) {
        Deque<Integer> calcStack = new ArrayDeque<>();
        int calcResult = 0;
        char operator = '+';
        int currNum = 0;
        int currIndex = 0;
        boolean isParentheses = false;
        int startIndex = 0;
     
        for (char letter : str.toCharArray()) {
            if (Character.isDigit(letter)) {
                currNum = 10 * currNum + letter - '0';
            } else {
                if (letter == '(') {
                    // Remember where the parenthesized sub-expression starts
                    // (note: this simple flag does not support nested parentheses).
                    isParentheses = true;
                    startIndex = currIndex + 1;
                }
                if (letter == ')') {
                    // Evaluate the sub-expression recursively.
                    isParentheses = false;
                    currNum = simpleCalc(str.substring(startIndex, currIndex));
                }
                if (!isParentheses) {
                    pushCalcStack(currNum, operator, calcStack);
                    operator = letter;
                    currNum = 0;
                }
            }
            currIndex++;
        }
     
        // Push the last number (this is a no-op if the expression ended with ')').
        pushCalcStack(currNum, operator, calcStack);
     
        // The stack now contains only terms to be added; sum them up.
        for (int num : calcStack)
            calcResult += num;

        return calcResult;
    }
 
    public static void main(String[] args) {
        String input = "2*(1+2)/2-1*(5*3)";
     
        int calcResult = new Main().simpleCalc(input);
        System.out.println(calcResult);
    }
}



3. Extension

If we want to support other brackets such as '{' and '[', we can do that with the same recursive call by modifying some of the conditions :)
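For example, the checks for '(' and ')' in simpleCalc could be generalized like the following sketch. This is only a minimal illustration; the helper method names are my own, and the rest of the program is assumed to stay as above.

BracketSketch.java
package org.inut;

class BracketSketch {
    // Treat '(', '{' and '[' as opening brackets.
    static boolean isOpeningBracket(char c) {
        return c == '(' || c == '{' || c == '[';
    }

    // Treat ')', '}' and ']' as closing brackets.
    static boolean isClosingBracket(char c) {
        return c == ')' || c == '}' || c == ']';
    }
}

Inside the loop of simpleCalc, the conditions would then become if (isOpeningBracket(letter)) and if (isClosingBracket(letter)), while the recursive call itself stays unchanged.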


Mar 29, 2020

Vertexes on the contour formed by n rectangles




This is Manila. I met my best friends :)

1. Introduction

In this article, I consider a simple program outputting vertexes on the contour formed by n rectangles.

The locations and heights of the n rectangles are given as in Figures 1, 2, and 3.
It is guaranteed that 2 ≤ n ≤ Integer.MAX_VALUE.


I assume the following about the vertex information.

[Assumption]
1. I suppose that the vertex information of each rectangle is represented by an array of integers [lx, rx, ly, hy], [lx', rx', ly', hy'], ... .

2. It is guaranteed that 0 ≤ lx < rx ≤ Integer.MAX_VALUE and 0 ≤ ly < hy ≤ Integer.MAX_VALUE.

2. Program

2.1. Input

For the program input, I assume the following list.

[Input]
{(lx, rx, ly, hy), (lx', rx', ly', hy'), ...}

2.2. Output

For the program output, I output the list of vertexes on the contour formed by these rectangles, as in Figures 1', 2', and 3'.



[Output]
Figure1: {(lx,ly), (rx,ly), (rx,ly'), (rx',ly'), (lx,hy), (lx',hy), (lx',hy'), (rx',hy')}
Figure2: {(lx,ly), (lx',ly), (lx',ly'), (rx',ly'), (lx,hy), (rx,hy), (rx,hy'), (rx',hy')}
Figure3: {(lx,ly), (rx',ly'), (lx,ly), (rx',ly')}

2.3. Implementation

For example, it can be implemented simply as follows.

Main.java
package org.inut;

import java.util.*;

class Main {
  // Coordinates of the most recently processed rectangle (left x, right x, lower y, upper y).
  int lx = 0;
  int rx = 0;
  int ly = 0;
  int hy = 0;

  public List<List<Integer>> seekVertexes(List<List<Integer>> input, List<List<Integer>> outputLowerSide, List<List<Integer>> outputUpperSide) {
    for (List<Integer> list : input) {
      int newLx = list.get(0);
      int newRx = list.get(1);
      int newLy = list.get(2);
      int newHy = list.get(3);

      // seek vertexes on the lower side
      if (ly != newLy) {
        if (outputLowerSide.isEmpty()) {
          outputLowerSide.add(new ArrayList<Integer>() {{add(newLx); add(newLy); }});
          lx = newLx;
          ly = newLy;
        } else {
          if (ly < newLy) {
            outputLowerSide.add(new ArrayList<Integer>() {{ add(rx); add(ly); }});
            outputLowerSide.add(new ArrayList<Integer>() {{ add(rx); add(newLy); }});
          } else if (ly > newLy) {
            outputLowerSide.add(new ArrayList<Integer>() {{ add(newLx); add(ly); }});
            outputLowerSide.add(new ArrayList<Integer>() {{ add(newLx); add(newLy); }});
          }

          lx = newLx;
          ly = newLy;
        }
      }

      // seek vertexes on the upper side
      if (hy != newHy) {
        if (outputUpperSide.isEmpty()) {
          outputUpperSide.add(new ArrayList<Integer>() {{add(newLx); add(newHy); }});
          lx = newLx;
          hy = newHy;
        } else {
          if (hy > newHy) {
            outputUpperSide.add(new ArrayList<Integer>() {{ add(rx); add(hy); }});
            outputUpperSide.add(new ArrayList<Integer>() {{ add(rx); add(newHy); }});
          } else if (hy < newHy) {
            outputUpperSide.add(new ArrayList<Integer>() {{ add(newLx); add(hy); }});
            outputUpperSide.add(new ArrayList<Integer>() {{ add(newLx); add(newHy); }});
          }

          lx = newLx;
          hy = newHy;
        }
      }

      rx = newRx;
    }

    // Close the contour at the right end of the last rectangle.
    outputLowerSide.add(new ArrayList<Integer>() {{ add(rx); add(ly); }});
    outputUpperSide.add(new ArrayList<Integer>() {{ add(rx); add(hy); }});

    List<List<Integer>> result = new ArrayList<>();
    result.addAll(outputLowerSide);
    result.addAll(outputUpperSide);

    return result;

  }

  public static void main(String[] args) {
    // Example rectangles given as (lx, rx, ly, hy), ordered by their left x coordinate.
    List<List<Integer>> input = new ArrayList<>();
    input.add(Arrays.asList(1,3,1,3));
    input.add(Arrays.asList(2,5,2,5));
    input.add(Arrays.asList(4,7,1,4));

    List<List<Integer>> outputLowerSide = new ArrayList<>();
    List<List<Integer>> outputUpperSide = new ArrayList<>();
    List<List<Integer>> result = new Main().seekVertexes(input, outputLowerSide, outputUpperSide);

    for (List<Integer> elm : result) {
      System.out.println(elm);
    }
  }
}



3. Sort and Performance

This program works well.
However, if the input needs to be sorted and n is a large number, I would use an efficient sorting algorithm to keep the performance good. For example, merge sort :)
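As a minimal sketch, assuming the rectangles should be ordered by their left x coordinate before calling seekVertexes, the standard library sort can be used; for lists of objects it is a stable merge-sort-based algorithm (TimSort), so the cost stays at O(n log n) even for a large n. The class name here is just for illustration.

SortRectanglesExample.java
package org.inut;

import java.util.*;

class SortRectanglesExample {
  public static void main(String[] args) {
    // Rectangles as (lx, rx, ly, hy); intentionally not ordered by lx here.
    List<List<Integer>> input = new ArrayList<>();
    input.add(Arrays.asList(4, 7, 1, 4));
    input.add(Arrays.asList(1, 3, 1, 3));
    input.add(Arrays.asList(2, 5, 2, 5));

    // Sort by the left x coordinate (index 0) with the stable merge-sort-based sort.
    input.sort(Comparator.comparingInt(rect -> rect.get(0)));

    System.out.println(input);  // [[1, 3, 1, 3], [2, 5, 2, 5], [4, 7, 1, 4]]
  }
}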

Jan 15, 2019

Completeness, Optimality, and Time and Space Complexity of BFS




1. Introduction

There are some basic search algorithms. Breadth-first search (BFS) is one of them and is very simple. I note its completeness, optimality, and time and space complexity :)

The photo is a nostalgic Japanese toy store :)


2. Algorithm

BFS does a series of processes depth by depth: finding child nodes and visiting them. When a node is visited, its child nodes are found. These child nodes are inserted into a FIFO queue, and when a child node is visited in turn, it is removed from the queue.




Assuming there is a tree like this figure and we want to find 'Cake', the series of processes is as follows (a minimal code sketch is shown after the list).

1. When node1 is visited, node2 and node3 are found and enqueued.
2. After that, node2 is visited and dequeued. And also, node4 and node5 are found and enqueued.
3. After that, node3 is visited and dequeued. And also, node6 is found and enqueued.
4. After that, node4 is visited and dequeued.
5. After that, node5 is visited and dequeued. Node5 has 'Cake' and is a goal node, so the search task is completed :)
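
The following is a minimal sketch of this search in Java. The node values other than 'Cake' and the exact shape of the tree are my own assumptions (the figure is reproduced only roughly), but the queue handling follows the steps above.

BfsSketch.java
import java.util.*;

class BfsSketch {
  // A simple tree node holding a value and its child nodes.
  static class Node {
    String value;
    List<Node> children = new ArrayList<>();
    Node(String value) { this.value = value; }
  }

  // Breadth-first search with a FIFO queue: visit nodes level by level
  // and return true when the goal value is found.
  static boolean bfs(Node root, String goal) {
    Deque<Node> queue = new ArrayDeque<>();
    queue.add(root);
    while (!queue.isEmpty()) {
      Node node = queue.poll();         // visit the node and dequeue it
      if (goal.equals(node.value)) {
        return true;                    // goal node found
      }
      queue.addAll(node.children);      // enqueue its child nodes
    }
    return false;
  }

  public static void main(String[] args) {
    // node1 -> node2, node3; node2 -> node4, node5 ('Cake'); node3 -> node6
    Node node1 = new Node("Tea");
    Node node2 = new Node("Coffee");
    Node node3 = new Node("Juice");
    Node node4 = new Node("Cookie");
    Node node5 = new Node("Cake");
    Node node6 = new Node("Candy");
    node1.children.addAll(Arrays.asList(node2, node3));
    node2.children.addAll(Arrays.asList(node4, node5));
    node3.children.add(node6);

    System.out.println(bfs(node1, "Cake"));  // true
  }
}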


3. Characteristic

3.1. Completeness and Optimality

BFS is complete and optimal, because it has the capacity to visit all nodes and it finds the shallowest goal first (so it is optimal when all step costs are equal). So, BFS is a good algorithm in light of its completeness and optimality.


3.2. Time and Space Complexity

But, in terms of its time and space complexity, BFS is not a good algorithm. It is because both of them increase exponentially with the depth of the goal node and the branching factor.

If we define the following, the time and space complexity are both O(b^x).

depth of a goal node: x
branching factor: b
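
For example, with a branching factor b = 10 and a goal node at depth x = 5, BFS may generate on the order of 10^5 = 100,000 nodes, and in the worst case almost all of them are held in the queue at the same time.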


Jun 12, 2017

Stabilization of Successor and Predecessor in Chord




1. Introduction

Chord is one of the most famous algorithms for a distributed hash table. In this article, I note how Chord stabilizes successors and predecessors. Understanding stabilization mechanisms like Chord's is important for understanding cloud computing :)
I don't note the stabilization of the finger table this time and will note it in another article :)


2. Stabilization of Successor and Predecessor

2.1. Logic of Stabilization

The logic of the stabilization of successors and predecessors is as follows. In this figure, an ID is the hash value of the node's IP address and port. The distance between nodes is the difference of their IDs. So, the distance between 'Node A' and 'Node B' is as follows.



・Distance between 'Node A' and 'Node B'
ID of 'Node B' - ID of 'Node A'

(ID: the hash value of the node's IP address and port) 
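
For example, if the ID of 'Node A' is 10 and the ID of 'Node B' is 25, the distance from 'Node A' to 'Node B' is 25 - 10 = 15. (On the actual ring, this subtraction wraps around modulo the size of the ID space.)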


2.2. Specific Example of Stabilization

Let's assume that the following ring, in which the blue nodes participate, exists, and that a pink node newly joins the ring.
At this point, the successor of 'Node A' is incorrect, the predecessor of 'Node AA' is not assigned, and the predecessor of 'Node B' is incorrect.



'Node AA' asks whether the predecessor of its current successor ('Node B') is itself ('Node AA').
Because the predecessor of 'Node B' is not 'Node AA', 'Node AA' asks 'Node C' whether 'Node AA' is closer to 'Node B' than 'Node A' is. 'Node AA' is closer to 'Node B' than 'Node A' is, so the predecessor of 'Node B' is revised to be 'Node AA'.

'Node A' asks whether the predecessor of its current successor ('Node B') is itself ('Node A').
Because the node that is closer to 'Node B' than itself is 'Node AA', 'Node A' changes its successor to 'Node AA'. And then, 'Node A' tells 'Node AA' that it may be the predecessor of 'Node AA', and 'Node AA' sets its predecessor to 'Node A'.
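
As a rough sketch (my own simplification, not the real Chord implementation), the stabilization described above can be written like the following. The class name, the ring size, and the method names are illustrative assumptions; the point is that each node periodically runs stabilize() and that distances are compared on the ring.

ChordNodeSketch.java
class ChordNodeSketch {
  static final int RING_SIZE = 256;   // assumed ID space: 0 .. 255

  int id;                             // hash value of the node's IP address and port
  ChordNodeSketch successor;
  ChordNodeSketch predecessor;

  ChordNodeSketch(int id) { this.id = id; }

  // Distance from this node to another node, measured clockwise on the ring.
  int distanceTo(ChordNodeSketch other) {
    return Math.floorMod(other.id - this.id, RING_SIZE);
  }

  // Run periodically: if the successor's predecessor is closer to us than our
  // current successor, adopt it, and then let the successor know about us.
  void stabilize() {
    ChordNodeSketch x = successor.predecessor;
    if (x != null && distanceTo(x) != 0 && distanceTo(x) < distanceTo(successor)) {
      successor = x;
    }
    successor.notifyOf(this);
  }

  // Called by a node that believes it may be our predecessor.
  void notifyOf(ChordNodeSketch candidate) {
    if (predecessor == null || candidate.distanceTo(this) < predecessor.distanceTo(this)) {
      predecessor = candidate;
    }
  }
}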

The ring is as follows at this time. All successors and predecessors of nodes are properly set :)



3. Conclusion

The stabilization of successors and predecessors of nodes is based on the simple logic noted in 2.1. Even if nodes are added or removed, the stabilization revises these successors and predecessors, and the ring is properly maintained.

Apr 23, 2017

Two Styles of Continue and Break in Scala




1. Introduction

In this article, I note how to write 'continue' and 'break' in Scala in two styles. One is the classic Java style and the other is an elegant style like Japanese calligraphy drawn unicursally. I love the latter style :)


2. Continue and Break in Scala

2.1. Continue

First, I note 'continue' in Scala. The test data is a simple Map object.

・ test data
val data = Map[String, Double]( "libor1M" -> 0.1, "libor3M" -> 0.2, "tibor1M" -> 0.3, "libor6M" -> 0.4)


If you write a program in Scala in the classic Java style, it may look like the following. It's redundant, the readability is low, and it's not beautiful.

・ 'continue' in Scala (classic Java style)
val exitBlock = new Breaks()
import exitBlock.{breakable, break}

for (one <- data) {
  breakable {
    if (!one._1.startsWith("libor")) {
      break
    }
    println(one._1 + ":" + one._2)
  }
}


But you can write simple and elegant code as follows in Scala. Compared to the classic Java style above, the amount of code decreases. It's like beautiful Japanese calligraphy drawn unicursally, and so readable :)

・ 'continue' in Scala (elegant style :))
data.withFilter(p => p._1.startsWith("libor")).foreach(f => println(f._1 + ":" + f._2))


The full source code is as follows.

・ full source code
import scala.util.control.Breaks

object test {
  def main(args: Array[String])={
    val data = Map[String, Double]("libor1M" -> 0.1, "libor3M" -> 0.2,
                                   "tibor1M" -> 0.3, "libor6M" -> 0.4)
                                       
   
    // I don't like the following code. It's not elegant ...
    val exitBlock = new Breaks()
    import exitBlock.{breakable, break}

    for (one <- data) {
      breakable {
        if (!one._1.startsWith("libor")) {
          break
        }
        println(one._1 + ":" + one._2)
      }
    }
   
   
    // I like the following code :)
    data.withFilter(p => p._1.startsWith("libor")).foreach(f => println(f._1 + ":" + f._2))
  }
}


2.2. Break

Next, I note 'break' in Scala. The test data is a simple List object.

・ test data
val data = List[String]("libor1M", "libor3M", "tibor1M", "libor6M")


If you write a program in Scala in the classic Java style, it may be the following redundant one, as with 'continue'.

・ 'break' in Scala (classic Java style)
val exitBlock = new Breaks()
import exitBlock.{breakable, break}

breakable {
  for (one <- data) {
    if (one == "tibor1M") {
      println(one)
      break
    }
  }
}

But, as with the case of 'continue', you can write simple and elegant code. It's so readable :)

・ 'break' in Scala (elegant style :))
val result = data.find(p => p == "tibor1M").toString()
println(result)


The full source code is as follows.

・full source code
import scala.util.control.Breaks

object test {
  def main(args: Array[String])={  
    val data = List[String]("libor1M", "libor3M", "tibor1M", "libor6M")
   
   
    // I don't like the following code. It's not elegant ...
    val exitBlock = new Breaks()
    import exitBlock.{breakable, break}


    breakable {
      for (one <- data) {
        if (one == "tibor1M") {
          println(one)
          break
        }
      }
    }
   
   
    // I like the following code :)
    val result = data.find(p => p == "tibor1M").toString()
    println(result)

  }
}



3. Conclusion

In Scala, you can write 'continue' and 'break' in the classic Java style or in the beautiful style like Japanese calligraphy drawn unicursally. Writing in the latter style reduces the amount of code and greatly improves readability.


Mar 13, 2017

Fun Quiz on SparkR: How to Create Data Frame without Spark-CSV Package :)





1. Introduction

We are able to write Spark applications that create a data frame directly from a csv file by using the spark-csv package. And so, the package is often used.

Mahler's Symphony No. 5 will be performed by Eliahu Inbal and the Konzerthausorchester Berlin in Tokyo tonight. I arrived at the hall an hour before the concert started. So, I considered a few interesting quizzes about creating data frames from csv files without the above package :)

In this article, Spark's version is 1.6.3.

2. Fun Quiz

In this chapter, I note the source code using the spark-csv package and a little fun quiz :)
Before starting the quiz game, we need to prepare the following csv file.

・ test_data.csv
20170228,scheme1,BermudanCallableSwap,JPY,21261339422
20170228,scheme2,BermudanCallableSwap,JPY,22759109989
20170228,scheme3,BermudanCallableSwap,JPY,21405741891
...


2.1. Use Spark-CSV Package

First, I note the program using the spark-csv package. There are two important points. One is setting 'SPARKR_SUBMIT_ARGS' to use the spark-csv package, and the other is loading a csv file directly as a data frame.

How to set 'SPARKR_SUBMIT_ARGS' and how to load a csv file as a data frame are as follows.

・ set 'SPARKR_SUBMIT_ARGS' (com.databricks:spark-csv)
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.11:1.5.0" "sparkr-shell"')

・ load a csv file as data frame
df <- read.df(sqlContext, inputFilePath, source = "com.databricks.spark.csv", schema = schema)

The full source code is as follows.

・ source code
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.11:1.5.0" "sparkr-shell"')

# init SparkR
sc <- sparkR.init(appName = "visualization", master = "local[*]")
sqlContext <- sparkRSQL.init(sc)

# parameter
inputFilePath <- "C:/dev/R/test_data.csv"

# schema of data frame
schema <- structType(structField(x = "date", type = "integer", nullable = FALSE),
                     structField(x = "scheme_number", type = "string", nullable = FALSE),
                     structField(x = "product", type = "string", nullable = FALSE),
                     structField(x = "currency", type = "string", nullable = FALSE),
                     structField(x = "sensitivity", type = "double", nullable = FALSE))

# load a csv file as data frame
df <- read.df(sqlContext, inputFilePath, source = "com.databricks.spark.csv", schema = schema)


showDF(df)

2.2. Quiz

Next, I note a simple quiz. Enjoy!! :)

・ quiz
If you don't use the spark-csv package, how do you create a data frame with the following schema from test_data.csv?

・ schema
  date,scheme_number,product,currency,sensitivity

・ test_data.csv
  20170228,scheme1,BermudanCallableSwap,JPY,21261339422
  20170228,scheme2,BermudanCallableSwap,JPY,22759109989
  20170228,scheme3,BermudanCallableSwap,JPY,21405741891
  ...

The time limit is 5 minutes :)


3. Answer

The answer is as follows. It's a bit long-winded, but there is a bit of fun :)
You must prepare a record splitter.

・ record splitter
splitRecord <- function(record) {
  Sys.setlocale("LC_ALL", "C")
  part <- strsplit(record, ",")[[1]]
  list(column1 = as.integer(part[1]),
       column2 = part[2],
       column3 = part[3],
       column4 = part[4],
       column5 = as.double(part[5]))
}

And then, you must make an RDD, split its records, and convert it to a data frame.

・ load a csv file and create data frame
# load a csv file
orgData <- SparkR:::textFile(sc, inputFilePath)

# split data
splitData <- SparkR:::lapply(orgData, splitRecord)

# create data frame
df <- SparkR:::createDataFrame(sqlContext, splitData, schema)

The full source code is as follows :)

・ source code
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

# init SparkR
sc <- sparkR.init(appName = "visualization", master = "local[*]")
sqlContext <- sparkRSQL.init(sc)

# parameter
inputFilePath <- "C:/dev/R/test_data.csv"

# schema of data frame
schema <- structType(structField(x = "date", type = "integer", nullable = FALSE),
                     structField(x = "scheme_number", type = "string", nullable = FALSE),
                     structField(x = "product", type = "string", nullable = FALSE),
                     structField(x = "currency", type = "string", nullable = FALSE),
                     structField(x = "sensitivity", type = "double", nullable = FALSE))

# record splitter
splitRecord <- function(record) {
  Sys.setlocale("LC_ALL", "C")
  part <- strsplit(record, ",")[[1]]
  list(column1 = as.integer(part[1]),
       column2 = part[2],
       column3 = part[3],
       column4 = part[4],
       column5 = as.double(part[5]))
}


# load a csv file
orgData <- SparkR:::textFile(sc, inputFilePath)


# split data
splitData <- SparkR:::lapply(orgData, splitRecord)


# create data frame
df <- SparkR:::createDataFrame(sqlContext, splitData, schema)


showDF(df)


4. Conclusion

If you enjoyed this quiz, I will be pleased :) The concert will start soon!


・ Gustav Mahler    Symphony No. 5
・ Richard Wagner  Tristan and Isolde: Prelude and Love Death
・ Konzerthausorchester Berlin, cond. Eliahu Inbal
・ Tokyo



Feb 18, 2017

SparkR with Visual Studio and RStudio





1. Introduction

I usually see many lovers of R in financial investment banks. If they had the capability of processing 'Big Data' with R, which they love, they would feel very happy :) SparkR makes it possible. In this article, I note how to enjoy SparkR.

・SparkR
 https://github.com/apache/spark/tree/master/R

SparkR is included in Apache Spark. I use RStudio and Visual Studio as IDEs. The versions of Apache Spark, R, Visual Studio and RStudio are as follows.

Apache Spark 1.6.3
R 3.3.2
RStudio 1.0.136
Visual Studio Community 2015 Update 3 (Update 3 or later is necessary)

 

2. SparkR with IDE

I deployed Spark 1.6.3 as follows and set SPARK_HOME to C:\enjyoyspace\spark-1.6.3. RStudio is often used as the IDE for executing SparkR. We can also use Visual Studio, and there are many lovers of Visual Studio, so I note how to execute SparkR with both RStudio and Visual Studio.

C:\enjyoyspace\spark-1.6.3
    ├─bin
    ├─conf
    ├─data
    ├─ec2
    ├─examples
    ├─lib
    ├─licenses
    ├─python
    ├─R


2.1. Use RStudio

SparkR is loaded as follows. SparkR's APIs are easy and friendly for lovers of the R language, so the learning cost may be very low for them :)

library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))


The full source code is as follows :)

# load SparkR
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

sc <- sparkR.init(appName = "CalcProhitAndLoss")
sqlContext <- sparkRSQL.init(sc)

# create data frame (R)
shockedPV <- data.frame(date = c("20161001", "20161101", "20161224"), PV = c(10000000000, 10000001000, 10000002000))
nonShockedPV <- data.frame(date_nonShocked = c("20161001", "20161101", "20161224"), PV_nonShocked = c(9000000000, 9000001000, 9000002000))

# create data frame (SparkR) from data frame (R)
shockedPVforSparkR <- createDataFrame(sqlContext,shockedPV)
nonShockedPVforSparkR <- createDataFrame(sqlContext,nonShockedPV)

# join the two data frames on the date columns
masterPV <- join(shockedPVforSparkR, nonShockedPVforSparkR, shockedPVforSparkR$date == nonShockedPVforSparkR$date_nonShocked)

# register table
registerTempTable(masterPV, "masterPV")

# SparkSQL
prohitAndLossForSparkR <- sql(sqlContext, "SELECT date, PV-PV_nonShocked AS prohit_and_loss FROM masterPV")

# collect query results
prohitAndLoss <- collect(prohitAndLossForSparkR)

# display collected results
print(prohitAndLoss)


2.2. Use Visual Studio Community

In the execution of SparkR with Visual Studio, it's necessary to install the following plugin.  The version of Visual Studio must be '2015 Update 3 or later'.

・R Tools for Visual Studio
 https://microsoft.github.io/RTVS-docs/

We are able to use the script described in 2.1. There is nothing to change :)


3. Execution Result

The execution result is as follows. The calculation results are computed normally :)

      date prohit_and_loss
1 20161224           1e+09
2 20161001           1e+09
3 20161101           1e+09


4. Conclusion

We can use an attractive tool 'SparkR' with Visual Studio or RStudio. If you use Visual Studio, the version of Visual Studio is '2015 Update 3 or later'.