BBA 101 Principles of Management Unit 2 -Planning
Quantitative Techniques

1. Marginal Analysis

This technique is used in decision-making to determine how much extra output will result if one more unit of a variable input (e.g. raw material, a machine or a worker) is added. In his book, ‘Economics’, Paul Samuelson defines marginal analysis as the extra output that will result from adding one extra unit of any input variable, other factors being held constant.

Marginal analysis is particularly useful for evaluating alternatives in the decision-making process.
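The idea can be sketched in a few lines of Python: given total-output figures for successive numbers of workers (the figures below are hypothetical, for illustration only), the marginal output of the n-th worker is the difference between consecutive totals.

```python
# Illustrative sketch: total output for 0..5 workers, other inputs held constant.
# The figures are assumed, not taken from any real firm.
output_by_workers = [0, 40, 90, 130, 160, 180]

def marginal_output(totals, n):
    """Extra output obtained by adding the n-th worker."""
    return totals[n] - totals[n - 1]

for n in range(1, len(output_by_workers)):
    print(f"Worker {n}: marginal output = {marginal_output(output_by_workers, n)}")
```

Note how the marginal output falls as more workers are added, which is the pattern (diminishing returns) that marginal analysis is designed to expose when comparing alternatives.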

 

2. Financial Analysis

This decision-making tool is used to estimate the profitability of an investment, to calculate the payback period (the time taken for the cash benefits to recover the original cost of an investment), and to analyze cash inflows and cash outflows.
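As a sketch, the payback period can be computed by accumulating yearly cash inflows until they recover the original outlay. The project figures below are hypothetical, and the calculation assumes inflows arrive evenly within each year.

```python
def payback_period(initial_cost, annual_inflows):
    """Years until cumulative cash inflows recover the initial cost.
    Assumes inflows arrive evenly within each year (fractional result)."""
    cumulative = 0.0
    for year, inflow in enumerate(annual_inflows, start=1):
        cumulative += inflow
        if cumulative >= initial_cost:
            shortfall = initial_cost - (cumulative - inflow)
            return year - 1 + shortfall / inflow
    return None  # investment not recovered within the horizon

# Hypothetical project: Rs. 1,00,000 outlay, uneven yearly inflows
print(payback_period(100000, [30000, 40000, 50000, 20000]))
```

Shorter payback periods are generally preferred when comparing investment alternatives, though the measure ignores cash flows after the payback point.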

 

3. Break-Even Analysis

This tool enables a decision-maker to evaluate the available alternatives based on price, fixed cost and variable cost per unit. Break-even analysis is a measure by which the level of sales necessary to cover all fixed costs can be determined.

Using this technique, the decision-maker can determine the break-even point for the company as a whole, or for any of its products. At the break-even point, total revenue equals total cost and the profit is nil.
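A minimal sketch of the computation: the break-even volume is the fixed cost divided by the contribution margin (price minus variable cost per unit). The figures below are hypothetical.

```python
def break_even_units(fixed_cost, price, variable_cost):
    """Sales volume at which total revenue equals total cost (profit = 0)."""
    contribution = price - variable_cost  # contribution margin per unit
    return fixed_cost / contribution

# Hypothetical product: Rs. 50,000 fixed cost, price Rs. 25, variable cost Rs. 15
units = break_even_units(50000, 25, 15)
print(units)  # units needed to cover all fixed costs

# Check: at this volume, total revenue equals total cost, so profit is nil
profit = 25 * units - (50000 + 15 * units)
print(profit)
```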

 

4. Ratio Analysis

It is an accounting tool for interpreting accounting information. Ratios define the relationship between two variables. The basic financial ratios compare costs and revenue for a particular period. The purpose of conducting a ratio analysis is to interpret financial statements to determine the strengths and weaknesses of a firm, as well as its historical performance and current financial condition.
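Two common ratios, computed from hypothetical statement figures, illustrate the idea: a profitability ratio relating profit to revenue, and a liquidity ratio relating current assets to current liabilities.

```python
# Hypothetical figures from a firm's financial statements (in Rs.)
revenue, net_profit = 500000, 60000
current_assets, current_liabilities = 200000, 100000

profit_margin = net_profit / revenue                  # profitability ratio
current_ratio = current_assets / current_liabilities  # liquidity ratio
print(profit_margin, current_ratio)
```

Comparing such ratios across periods, or against industry norms, is what reveals the strengths and weaknesses the text refers to; a single ratio in isolation says little.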

 

5. Operations Research Techniques

One of the most significant sets of tools available to decision-makers is operations research. Operations research (OR) involves the practical application of quantitative methods to the process of decision-making. When using these techniques, the decision-maker makes use of scientific, logical or mathematical means to arrive at realistic solutions to problems. Several OR techniques have been developed over the years.

 

6. Linear Programming

Linear programming is a quantitative technique used in decision-making. It involves making an optimum allocation of the scarce or limited resources of an organization to achieve a particular objective. The word ‘linear’ implies that the relationships among the different variables are proportional.

The term ‘programming’ implies developing a specific mathematical model to optimize outputs when the resources are scarce. In order to apply this technique, the situation must involve two or more activities competing for limited resources and all relationships in the situation must be linear.

Some of the areas of managerial decision-making where the linear programming technique can be applied are:

i. Product mix decisions

ii. Determining the optimal scale of operations

iii. Inventory management problems

iv. Allocation of scarce resources under conditions of uncertain demand

v. Scheduling production facilities and maintenance.
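A toy product-mix problem can illustrate the technique. The sketch below brute-forces a tiny two-product LP over integer output levels; real problems are solved with a dedicated LP solver (e.g. the simplex method), and all the resource figures here are assumptions for illustration.

```python
# Toy product-mix LP: maximize profit 3x + 5y for products x and y,
# subject to assumed resource limits:
#   x <= 4 (machine A hours), 2y <= 12 (machine B hours), 3x + 2y <= 18 (labour)
# Brute force over integer output levels; a real LP solver handles larger cases.
best = (0, 0, 0)  # (profit, x, y)
for x in range(0, 7):
    for y in range(0, 7):
        if x <= 4 and 2 * y <= 12 and 3 * x + 2 * y <= 18:
            profit = 3 * x + 5 * y
            if profit > best[0]:
                best = (profit, x, y)
print(best)  # optimal profit and the product mix that achieves it
```

Note that every constraint and the objective are linear in x and y, which is exactly the condition the text states for the technique to apply.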

 

7. Waiting-line Method

This is an operations research method that uses a mathematical technique for balancing the services provided against the waiting lines that form. Waiting lines (or queues) occur whenever the demand for a service exceeds the capacity of the service facilities.

Since a perfect balance between demand and supply cannot be achieved, either customers will have to wait for the service (excess demand) or there may be no customers for the organization to serve (excess supply).

When the queue is long and customers have to wait for a long duration, they may get frustrated. This may cost the firm its customers. On the other hand, it may not be feasible for the firm to maintain facilities that provide quick service at all times, since the cost of idle service facilities has to be borne by the company.

The firm, therefore, has to strike a balance between the two. The queuing technique helps to optimize customer service on the basis of quantitative criteria. However, it only provides vital information for decision-making and does not by itself solve the problem. Developing queuing models often requires advanced mathematical and statistical knowledge.
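The standard single-server (M/M/1) queuing model illustrates the balance the text describes. Given an arrival rate and a service rate (both assumed figures here), the textbook formulas below give the server utilization, the average number of customers waiting, and the average wait in line.

```python
# M/M/1 queue sketch (single server, Poisson arrivals, exponential service).
# lam = arrival rate, mu = service rate; a stable queue requires lam < mu.
def mm1_metrics(lam, mu):
    rho = lam / mu                 # server utilization (fraction of time busy)
    l_queue = rho**2 / (1 - rho)   # average number of customers waiting in line
    w_queue = l_queue / lam        # average wait in line (by Little's law)
    return rho, l_queue, w_queue

# Assumed figures: 8 arrivals per hour, capacity to serve 10 per hour
rho, lq, wq = mm1_metrics(lam=8, mu=10)
print(rho, lq, wq)
```

Even at 80% utilization the average wait is substantial, showing why the firm must trade off idle capacity against customer waiting time rather than eliminate either.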

 

8. Game Theory

This is a systematic and sophisticated technique that enables competitors to select rational strategies for attainment of goals. Game theory provides many useful insights into situations involving competition. This decision-making technique involves selecting the best strategy, taking into consideration one’s own actions and those of one’s competitors.

The primary aim of game theory is to develop rational criteria for selecting a strategy. It is based on the assumption that every player (a competitor) in the game (decision situation) is perfectly rational and seeks to win the game.

In other words, the theory assumes that the opponent will carefully consider what the decision-maker may do before he selects his own strategy.

Minimizing the maximum loss (minimax) and maximizing the minimum gain (maximin) are the two concepts used in game theory.

 

9. Simulation

This technique involves building a model that represents a real or an existing system. Simulation is useful for solving complex problems that cannot be readily solved by other techniques. In recent years, computers have been used extensively for simulation. The different variables and their interrelationships are put into the model.

When the model is programmed through the computer, a set of outputs is obtained. Simulation techniques are useful in evaluating various alternatives and selecting the best one. Simulation can be used to develop price strategies, distribution strategies, determining resource allocation, logistics, etc.
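As a sketch, a simple Monte Carlo simulation can compare pricing alternatives under uncertain demand. The demand model and all figures below are assumptions for illustration only, not a real pricing study.

```python
import random

random.seed(42)  # fixed seed so the simulated run is reproducible

def simulate_profit(price, unit_cost, trials=10000):
    """Average daily profit over many simulated days of random demand."""
    total = 0.0
    for _ in range(trials):
        # Assumed demand model: higher price depresses average demand
        demand = max(0, random.gauss(1000 - 4 * price, 50))
        total += (price - unit_cost) * demand
    return total / trials

# Compare three candidate prices against an assumed unit cost of Rs. 40
for price in (60, 80, 100):
    print(price, round(simulate_profit(price, unit_cost=40), 2))
```

The model and its variables (price, cost, demand distribution) are exactly the "variables and their interrelationships" the text says are put into a simulation; the outputs then let the decision-maker compare the alternatives.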

 

10. Decision Tree

This is an interesting technique used for the analysis of decisions. A decision tree is a sophisticated mathematical tool that enables a decision-maker to consider various alternative courses of action and select the best alternative. A decision tree is a graphical representation of alternative courses of action and the possible outcomes and risks associated with each action.

In this technique, the decision-maker traces the optimum path through the tree diagram. In the tree diagram the base, known as the ‘decision point,’ is represented by a square. Two or more chance events follow from the decision point. A chance event is represented by a circle and constitutes a branch of the decision tree. Every chance event produces two or more possible outcomes leading to subsequent decision points.

The decision tree can be illustrated with an example. If a firm expects an increase in the demand for its products, it can consider two alternative courses of action to meet the increased demand:

(a) Installing new machines,

(b) Introducing a double shift.

There are two possibilities for each alternative, i.e. output may increase (positive state) or fall (negative state). The probabilities associated with the two states are taken as 0.6 and 0.4 respectively. This information can be presented in tabular form, known as a pay-off matrix:

                          Positive state (0.6)    Negative state (0.4)
Additional machines       Rs. 3,00,000            Rs. 2,00,000
Double shift              Rs. 2,80,000            Rs. 2,40,000

The expected pay-off of each alternative is then:

Additional machines

= (Rs. 3,00,000 × 0.6) + (Rs. 2,00,000 × 0.4)

= Rs. 2,60,000

 

Double shift

= (Rs. 2,80,000 × 0.6) + (Rs. 2,40,000 × 0.4)

= Rs. 2,64,000

Since the pay-off from introducing a double shift is higher, it may be selected. Though the decision tree does not by itself provide a solution, it helps in decision-making by showing the available alternatives and their probabilities.

The decision tree allows the decision-maker to see the application of most of the steps in the decision-making process in one single diagram. The effectiveness of this decision-making technique depends on the assumptions and the probability estimates made by the decision-maker.
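The expected-value calculation from the example above can be written out directly; the probabilities and pay-offs are the ones given in the text.

```python
# Expected pay-off for each branch of the decision tree in the example above.
probabilities = {"positive": 0.6, "negative": 0.4}
payoffs = {
    "Additional machines": {"positive": 300000, "negative": 200000},
    "Double shift":        {"positive": 280000, "negative": 240000},
}

expected = {
    action: sum(probabilities[state] * value for state, value in states.items())
    for action, states in payoffs.items()
}
best = max(expected, key=expected.get)
print(expected, best)
```

Changing the probability estimates can change which branch wins, which is why the text stresses that the technique is only as good as the assumptions behind those estimates.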

Advantages of the Decision Tree

● It can be used for both classification and regression problems: decision trees can predict both continuous and discrete values, i.e. they work well in both regression and classification tasks.

● Decision trees are simple, so little effort is required to understand the algorithm.

● They can capture non-linear relationships: they can classify data that is not linearly separable.

● They do not require any transformation of the features when dealing with non-linear data, because a decision tree does not evaluate weighted combinations of features simultaneously.

● They are easy to understand, interpret and visualize.

● A decision tree can handle any type of data, whether numerical, categorical or boolean.

● Normalization is not required for a decision tree.

● Decision trees are among the machine-learning algorithms that do not require feature scaling; random forests are another. These algorithms are scale-invariant.

● They give a good idea of the relative importance of attributes.

● Useful in data exploration: a decision tree is one of the fastest ways to identify the most significant variables and the relations between two or more variables, and it can help in creating new variables/features for the target variable.

● Less data preparation needed: a decision tree is relatively unaffected by outliers and missing values in the data, so it requires less data cleaning.

● A decision tree is non-parametric: it makes no assumptions about the distribution of the data or the structure of the classifier.

 

Disadvantages

● Splitting on numerical variables with millions of records is expensive: the time complexity of this operation keeps increasing as the number of records grows, so a decision tree with many numerical variables takes a long time to train.

● The same limitation applies to related techniques such as random forests and XGBoost.

● Many features slow down training: time complexity increases as the number of inputs grows.

● A tree grown fully from the training set tends to overfit; this is controlled by pruning (pre- or post-pruning) or by ensemble methods such as random forest.

● Overfitting is one of the most serious problems for decision tree models. It can be addressed by setting constraints on the model parameters and by pruning.

● An overfitted tree achieves near-zero bias at the cost of very high variance in the output, which leads to many errors in the final estimates and highly inaccurate results.

● Instability: small variations in the data can result in a completely different tree being generated. This is known as variance in the decision tree, and it can be reduced by methods such as bagging and boosting.

● A single tree does not scale well to big data: if the dataset is too large, one tree may grow a very large number of nodes, which increases complexity and leads to overfitting.

● There is no guarantee of obtaining a 100% efficient decision tree.

To overcome these limitations of the decision tree, the random forest method can be used: rather than depending on a single tree, it creates a forest of multiple trees and takes the decision based on a majority vote.