60. Management Accountant Profession in the Spotlight - The Last Song
How many times have I heard of CFOs accepting a costing software proposal without consulting a skilled Management Accountant!
The attention is focused only on the speed of reporting, the ease of the respective procedures and the number of reports.
No one considers that the appropriateness of those reports may be lacking!
Since this is not the first time I examine this root cause of all kinds of judgment errors, but it will be the last, I will make a shortlist of the factors to be taken into account when choosing a costing system, factors that even the best software vendor cannot "impose":
a) Job Costing, Process Costing, Operation Costing.
b) Standard, Normal, Actual.
c) Volume-Based, Activity-Based.
d) Business Rate, Department Rate.
e) Theoretical, Practical, Normal, Budgeted Capacity-Based method.
f) Empirical, Heuristic, Mixed leading approach.
Having listed them, a short focus on each of these elements is needed; by the end it should be clear how the choice of a costing system should be shared with the "best" of the business's management accountants, one who combines skills with technical knowledge and familiarity with business processes.
a) Let's get started with the cost-gathering method, which leads to the distinction among Job Costing, Process Costing and Operation Costing.
Let's suppose the outputs of your processes are clearly distinguishable from one another, in the sense that they can vary in value and features according to the kind and amount of resources employed to make the product or deliver the service.
In this case all the costs incurred (Direct Materials, Direct Labor and Overhead Costs) may be assigned to a specific product/service, order, batch of products or project.
In other words, Job Costing is the cost-gathering system to be used.
If instead you work at a mass-production company whose products are so similar and homogeneous that they require similar processes and resource consumption, the most appropriate system is without a doubt Process Costing, which first "pays attention" to the costs incurred by the processes within the departments and only at a second step determines the product/service unit cost.
It is not over!
When just one category of costs, usually the direct materials, varies from one output unit to another whilst the others are similar in type and amount, then we may use Operation Costing.
What I have just described should suffice to make it clear how the Management Accountant should be involved in such a decision, but for the most skeptical I want to take my dissertation all the way!
b) The kind of measurement also affects the choice of the costing system and leads to the distinction among Actual, Standard and Normal Costing.
The first provides the costing of the specific objects by taking into account the actual costs incurred throughout the reference period (Overheads are generally known only at its end).
When the firm's information needs carry some urgency, as often happens with pricing, which moreover cannot be subject to recurrent monthly fluctuations in Unit Fixed Costs caused by variations in monthly output volume, management can opt for Normal Costing, which attributes a predetermined Overhead amount to each product unit based on a predetermined rate while assigning the actual amount of the other cost elements (direct materials in primis).
This approach is particularly appropriate when the strategy is based on cost leadership in a very dynamic and competitive environment, where the costs of some elements (direct materials) fluctuate and, at the same time, the timeliness of pricing is a critical success factor.
We complete this distinction by citing Standard Costing, adopted when, first of all, a certain homogeneity is present in all the business processes over time, due for instance to low customization, and it is possible to standardize the product/service costs in all their elements without waiting until the end of the accounting period for a reliable determination.
c) The Overhead Costs are attributed to the cost objects by means of the most reliable driver, i.e. the one whose change in quantity employed determines a specific variation in the Overhead amount.
When this driver expresses the amount of output (usually Labor Hours or Machine Hours) the method is called Volume-Based; when it results from each of the different "indirect" activities carried out per cost object (product, service, customer...), we have an Activity-Based approach.
It goes without saying that when the business processes are homogeneous, which often means having similar products, kinds of customers and projects, the Volume-Based approach fits that work environment well; in the opposite case, when the indirect activities and the resources needed to carry them out differ in nature because the products/services/projects/customers are rather different, the Activity-Based approach is the best possible method.
The literature on ABC is rich and I won't focus on how it operates, since the goal of this article is to highlight the general skills of a management accountant and the need for his participation in some decision making as a partner and not just as an executor.
d) Another important issue concerning the similarity or difference of the processes calls for a comparison of the work done in the business departments, which should bring about a different way to determine the Overhead Allocation Rate when using Normal Costing, or Standard Costing at the time of the Overhead Variance Analysis.
To be clearer, here is an example with reference to a Normal Costing system based on a Volume-Based approach.
We may apply two methods to allocate the Overhead Costs to a Product in case of multiple departments inside a firm:
1) General Allocation
Total Budget Overheads = $ 8,000,000
Budget Overheads Dpt A: $ 2,700,000
Budget Overheads Dpt B: $ 2,400,000
Budget Overheads Dpt C: $ 2,900,000
Total Budget Labor Hours (allocation driver): 192,000
Budget Labour Hours for Dpt A: 72,000
Budget Labour Hours for Dpt B: 55,000
Budget Labour Hours for Dpt C: 65,000
Hourly Overhead rate: 8,000,000/192,000 = $ 41.666
Let's suppose that the Actual Labor Hours for the Product A are the following:
Total Actual Labour Hours: 2,200
Actual Labor Hours on Prod. A for Dpt A: 1,000
Actual Labor Hours on Prod. A for Dpt B: 700
Actual Labor Hours on Prod. A for Dpt C: 500
Overheads Allocated to Prod. A = 41,666 X 2,200 = $ 91,665
2) Specific Allocation/Single Departments
Here is the second way to allocate Overheads to Product A, taking into account the input values listed above.
Hourly Overhead rate per Dpt
Hourly Overhead rate Dpt A: $2,700,000/72,000 = $ 37.5
Hourly Overhead rate Dpt B: $ 2,400,000/55,000 = $ 43.64
Hourly Overhead rate Dpt C: $ 2,900,000/ 65,000 = $ 44.61
Overheads Allocated to Prod. A from each Dpt
From Dpt A = $ 37.5 x Actual LB Hrs Dpt A = $ 37.5 X 1,000 = $ 37,500
From Dpt B = $ 43.64 x Actual LB Hrs Dpt B = $ 43.64 X 700 = $ 30,548
From Dpt C = $ 44.61 x Actual LB Hrs Dpt C = $ 44.61 X 500 = $ 22,305
Overheads Allocated to Prod. A = $ 90,353
As we can see there is a difference of $ 1,312.
In this case we might judge it as not material and prefer to apply the General Allocation method, which is easier and more immediate; but when departmental processes are so different that they may absorb different quantities and kinds of resources (in some cases managers refer to different cost drivers), the most natural choice is to apply different departmental rates in order to make the product costing as accurate as possible.
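The two methods above can be sketched in a few lines of Python, using the budget and actual figures from the example (exact arithmetic, so the results differ by a few dollars from the rounded rates used in the text):

```python
# Two ways to allocate budgeted overheads to Product A under Normal
# Costing with a Volume-Based driver (labor hours), per the example above.

budget_overheads = {"A": 2_700_000, "B": 2_400_000, "C": 2_900_000}
budget_hours = {"A": 72_000, "B": 55_000, "C": 65_000}
actual_hours = {"A": 1_000, "B": 700, "C": 500}  # hours worked on Product A

# 1) General allocation: a single plant-wide hourly rate
plant_rate = sum(budget_overheads.values()) / sum(budget_hours.values())
general_alloc = plant_rate * sum(actual_hours.values())  # about $91,667

# 2) Specific allocation: one hourly rate per department
dept_alloc = sum(
    budget_overheads[d] / budget_hours[d] * actual_hours[d]
    for d in budget_overheads
)  # about $90,353

# about $1,314 here; the text shows $1,312 because it uses rounded rates
difference = general_alloc - dept_alloc
print(f"General: {general_alloc:,.0f}  Departmental: {dept_alloc:,.0f}")
```

The gap between the two totals is exactly the materiality question discussed above: small enough to ignore, or a sign that departmental rates are needed.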
e) Another issue is the choice of the basis over which to spread the Fixed Overheads in order to obtain the Fixed Overhead Allocation Rate to apply to each object in its costing.
As we saw in the articles dedicated to the Output Capacity of a business unit, the solutions are four:
Theoretical, Practical, Normal and Budgeted Capacity.
The definitions of these kinds are well known and I won't repeat them here, inviting you instead to read the above-mentioned articles on this webpage.
It's clear that the Unit Cost resulting from this choice increases when shifting from Theoretical toward Budgeted or Normal Capacity (usually Average Budgeted Capacity over 3 years), since the capacity base gets smaller.
The consequences impact Product-pricing decisions as well as performance evaluation aspects.
f) The last aspect I want to take into account is the leading approach that drives business managers to decide on a favorite costing system.
I am talking about the importance attributed to empirical research (if known), that is, the findings of studies on the accuracy of costing systems, which also take advantage of statistical techniques.
If we choose to adopt an empirical approach, for instance, we will find that research shows that Volume-Based systems might work when resource costs are concentrated, and that ABC systems are likely to have the greatest benefit when resource costs are diversified.
Another finding is that, when forming cost pools (each pool with one cost driver different from the other pools) in order to achieve an Overhead Allocation Rate, a low number of cost pools seems reasonable because there is an acceptable tradeoff between the costs of adding more pools and the benefits of system accuracy.
Other findings exist, but I won't list them, in accordance with the purpose of this article, which I repeat once more: highlighting how the strategic participation of a skilled management accountant in the choice and configuration of a costing system is "compulsory" and not only advisable.
59. Digital Assets and Customer Data: How to Evaluate Them as Fixed Assets
May 5th 2022
It goes without saying that data is the most important source of value nowadays, and that holds for the vast majority of industries.
Without a doubt it will from now on be even more the catalyst for future economic growth, on which all business policies, and in particular marketing ones, are going to rely to increase the bottom line.
That means that data is to be considered like an Asset that produces value through its use and reuse over time and is not consumed immediately like a Raw Material unit.
What do I mean by use and reuse in practical terms?
Please think of how a business handles its data:
It not only ingests, stores, maintains but also secures the data.
Even more its dedicated specialists acquire, aggregate, prepare, clean, structure and (why not) enrich, transform and map raw data in order to achieve a suitable format for a better decision making.
In the end, data is made usable to the downstream data consumers within the same business.
The particularity is that these processes are repeated over time, often using the same data, which makes it different from what a business does with a consumable resource; hence the conclusion that data is to be considered like a Fixed Asset.
What about the determination of its financial value?
So far, all the value attributed to data has derived just from the expenses incurred to acquire a given amount of data to be "elaborated and interpreted" in order to better see and match customer requirements.
We all remember, for instance, how much a market survey costs, considering the resources deployed to obtain that data and draw the correct insights.
In other words, the current value of data is limited to the respective outlays we can find under different accounting items in a period's Income Statement.
But this way doesn't reflect the real financial weight of data for the business and buries the concept of Intangibles, which in my opinion also covers data, whose continuous use makes it just like a Fixed Asset.
As a result of this consideration, the issue that comes into play in applying data to the business world is assigning a quantifiable value to it.
How to proceed? Which approach can we use?
One of these could be to leverage Economics concepts such as the economic multiplier effect, marginal costs, the marginal propensity to save, the marginal propensity to consume and the Marginal Propensity to Reuse.
This way was conceived by Bill Schmarzo, creator of the Value Engineering methodology and well known for the Schmarzo Economic Digital Asset Valuation Theorem (where business data comes in).
Let's go on step by step.
Schmarzo states that data can be reused across an unlimited number of use cases at zero marginal cost by producing an Economic Multiplier Effect.
What is it like?
In Schmarzo's words, it is the ratio of the impact of an incremental increase in investment on the resulting incremental increase in output or value.
Why and how should it impact the finance "world"?
Think, for instance, of ROI in its most used formula, Operating Income/Invested Capital; an incremental increase in investment in Data (if considered like a Fixed Asset) causes, through that multiplier effect, an incremental increase in the numerator (Operating Income).
I won't remind everyone here of the applications of ROI in the business world; they are known and in some cases have already been dealt with in other articles of this webpage.
But why and how would this incremental value of output happen?
In order to have the Economic Multiplier Effect for data, businesses can rely on two main drivers.
1) Firstly, they need to increase the Marginal Propensity to Consume (MPC) for data, which means, in other words, making it easier for data users to consume data.
What is MPC like?
The Marginal Propensity to Consume is calculated as the change in consumption (usage or utilization) divided by the change in income (revenue or value).
See Exhibit 1 below, as illustrated by Schmarzo.
As for the determinants and other details of MPC, please forgive the lack of a full dissertation here, because the goal of this article is different, as you can easily see; you may refer to the sources that Bill Schmarzo also makes available online.
2) The second and most important driver of the Economic Multiplier Effect is the Marginal Propensity to Reuse (MPR).
In order to explain what MPR is like, we start first of all from the Schmarzo Economic Digital Asset Valuation Theorem, which states that organizations can take advantage of the sharing, reuse and continuous refinement of the organization's data and analytic assets, since they produce a reduction in marginal costs, growth in marginal value and an acceleration in Economic Value at each subsequent business operational case that uses the same data and analytic assets.
Having said that, here is how Schmarzo describes the MPR.
The Marginal Propensity to Reuse (MPR) states that an increase in the reuse of a data set across multiple use cases drives an increase in the attributable value of that data set at zero marginal cost.
We also know from Schmarzo's works that "anything that prevents the reuse and refinement of the data and analytic assets, is destroying the economic potential of data. If there is no sharing and reuse of the data and analytic assets, then the Economic Multiplier Effect cannot take effect."
Bill Schmarzo goes on to elaborate a formula that yields the Economic Value of a Data Set based on the concepts I have just illustrated; I invite you to look at it in his materials, on which he is still working with the University of San Francisco to take this formula, as he says, to the next level.
After illustrating these Economics concepts, which highlight the great "weight" of Data (and thanking Bill Schmarzo for allowing me to cite his work), I want to make it clear how, in financial terms, Data should be considered a Fixed Asset, precisely because it adds value over time through continuous use and is not totally consumed at first use, as happens instead, for instance, with a Raw Material unit.
I would now like to show a method of my own conception that returns a value of Data as a Fixed Asset, starting from the basics of my job, Managerial Accounting.
This theoretical explanation takes as its example the most value-adding data use, that is, the use and reuse of customer data.
Customer Data Evaluation
First of all, the data a business owns concerning present and potential customers has often been obtained from external providers at a negotiated price, or through market surveys and the like carried out by the Sales & Marketing department in the execution of its daily tasks.
These costs are recorded in the period's Income Statement of a business, either under the account specific to the purchase or hidden in the worked hours of the employees that work on that data.
As a result, to know the value base of customer data to be capitalized, we should transfer the cost of the purchase to another account that goes within the Assets & Liabilities Statement as an Intangible Fixed Asset.
This step consists of making an accounting entry within a Managerial Accounting system that provides for the use of Asset Accounts and not only of Income Accounts, whose exclusive use is instead typical of most Managerial Accounting systems.
On this occasion, until this theory is acknowledged as a Generally Accepted Accounting Principle, it will be expressed through that specific accounting system.
Here is an example:
|Debit||Credit||$|
|Customer Data Capitalized||Data Purchased||45,000|
A different kind of reasoning concerns the calculation of the value of people's work on data, to be done both for the collection of data and for other kinds of work such as entry, cleansing, integration from different sources, refinement, regrouping and, of course, the use made when getting in touch with clients and potential clients for new deals.
Why are we taking this data work into account?
Because it is intended to create value, which in practical terms means striking a bargain or enhancing the likelihood of striking it by making the data more significant and consistent.
If you are not convinced by this approach, please look again at the clarification by Bill Schmarzo quoted in the first part of this article, where he shows how the use and reuse of Data produce an Economic Multiplier Effect.
What are the reference elements to evaluate this use and reuse of data?
I prefer to use an hour-based approach and to take into account just the most important cost category involved, as a "tribute" to the principles of clarity and significance: labor costs.
Here is the response:
- The Labor Cost recorded in the Income Statement of the people involved.
- The Total Working Hours of those people throughout the reference period (net of the hours not worked for various reasons).
- The resulting Labor Hour Cost.
- The Working Hours spent within the reference period specifically on those kinds of tasks.
For instance, you should not count the hours spent fixing errors in the data concerned.
These elements come into play at the end of a period and the result of some appropriate calculations is an accounting entry to make.
Of course you may make these calculations at the beginning of a period when you are doing an estimate for the future.
Let's continue with an example:
Labour Cost (of the involved dpts) for the period: $ 400,000
Working Hours: 13,600
Labor Hour Cost: $ 400,000/13,600 = $ 29.41
Working Hours on Data = 2,720
Data Work Value = $ 29.41 X 2,720 = $ 79,995
The following step is the record above mentioned that concerns the transfer in the Managerial Accounting system from the Income Statement to the Asset & Liabilities Statement.
|Debit||Credit||$|
|Customer Data Capitalized||Labor Costs||79,995|
This kind of evaluation of the Working Hours spent on the use and reuse of Data should be done at the end of each period by taking into account the hours of that specific period, not the hours cumulated over different years.
At this point the Account "Customer Data Capitalized" adds up to $ 124,995 (45,000 + 79,995).
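The two capitalization steps above can be reproduced with a short Python sketch, using the article's figures and its whole-dollar rounding:

```python
# Value base of "Customer Data Capitalized": purchased data plus the
# value of the hours worked on data, per the example above.

data_purchased = 45_000        # external data purchase, reclassified as an asset

labor_cost = 400_000           # period labor cost of the departments involved
working_hours = 13_600
hours_on_data = 2_720          # hours spent specifically on data work

labor_hour_cost = round(labor_cost / working_hours, 2)    # $29.41, as in the text
data_work_value = round(labor_hour_cost * hours_on_data)  # ~$79,995 (rounded to whole dollars)

customer_data_capitalized = data_purchased + data_work_value
print(customer_data_capitalized)  # 124995
```

Note that without the intermediate rounding of the hourly rate the data-work value would come out at exactly $80,000 (2,720 is 20% of 13,600); the figures above follow the text's rounding.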
Considering that the marginal cost of reusing data is near zero, as Schmarzo tells us, you won't carry out the Depreciation assessment as you do for physical assets, which cannot be reused across many use cases at near-zero marginal cost.
As a matter of fact, there are considerable marginal costs (maintenance costs, utilization costs and other operating costs) associated with reusing a physical asset.
The only kind of "depreciation" that could be done is a devaluation due to the anticipation of an effective cancelation (or proven lack of use) of some data sets held by the business, for various reasons such as statutory regulations, a change in the market segment and so on.
What about the impact on some important measurement and assessment matters?
If we have a starting Operating Income of $ 600,000 and Total Assets of $ 7,500,000, after these accounting "shifts" the new results will be $ 724,995 for the Operating Income and $ 7,624,995 for the Total Assets respectively.
What does this mean on the Financial Assessment side?
A higher ROI (in its most used formula), with all the different judgments that may consequently derive in the different application cases of this indicator.
Initial ROI = 600,000/7,500,000 = 8%
New ROI = 724,995/7,624,995 = 9.5%
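The ROI shift can be checked in a couple of lines; the capitalized amount is added to both the numerator (costs moved out of the Income Statement) and the denominator (new asset):

```python
# ROI before and after capitalizing customer data, per the figures above.
operating_income, total_assets = 600_000, 7_500_000
capitalized = 124_995  # Customer Data Capitalized (purchase + data work)

initial_roi = operating_income / total_assets                              # 8.0%
new_roi = (operating_income + capitalized) / (total_assets + capitalized)  # ~9.5%
print(f"{initial_roi:.1%} -> {new_roi:.1%}")
```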
The variation in ROI is important in this example, and it could be even more important, and more realistic, if we abandon the Historical Cost evaluation method and move to a "Revaluation Approach".
As a matter of fact, the use of Customer Data is intended to generate Revenues that build in a margin on the costs incurred, and attributing a value different from the historical one could represent a more correct financial "picture" of this new category of Fixed Asset.
How can we do this?
After carrying out the calculations and the accounting entries just seen, we need to take into account the logic of the financial capitalization where the concepts of Present Value, Interest Rate and Principal and Interest come into play.
I will be clearer by recalling the main concepts and applying them to this case:
- The Present Value is the Cost that results from the records you have done so far under the Account "Customer Data Capitalized".
- The Interest Rate could be either prudential (since it would represent the Opportunity Cost), taking the form of the WACC, or risky (charging to the business the prospects of growth or decline of the market segment).
- The Principal and Interest are achieved by applying the compound capitalization formula.
It is advisable, though not mandatory, that the reference time for the capitalization be one year (in this case the compound capitalization is equivalent to a simple one). Why?
Considering that this operation is to be done on the working hours of every new period at its end, there is a risk of making the work heavy and not very reliable.
In fact, if you operate over two or more years, at each subsequent period-end capitalization you need to take up again the same data sets already "worked" and capitalized in previous periods, and you will have to update their value with a new interest rate.
Simply put, when considering two or more years, the yearly estimation work through compounding becomes heavy.
Moreover, the capitalization rate is more reliable when set for the year to come.
Taking this for granted, why could a CFO decide to resort to compounding?
One of the reasons can be found in regulatory provisions that require keeping some categories of data only for a specific number of periods; as a result, the manager might like to see their financial value for the whole period and its impact on financial performance measurement and assessment.
I note in this regard that the business needs an appropriate operating system able to distinguish the different data sets and the hours worked on each of them; in the same way, I remind all of us of the principle stating that the cost of information acquisition should be lower than its benefit.
Here is an example of the compound capitalization made by taking the previous amount as a starting base.
Present Value of Customer Data Capitalized: $ 124,995
Interest Rate (Market Value Growth Prospects of next year): 5%
Time: 1 Year
Principal and Interest = 124,995 X (1 + 0.05) = 131,244.75
Revaluation = 131,244.75 - 124,995 = 6,249.75
The further record is the following:
|Debit||Credit||$|
|Customer Data Capitalized||Revaluation||6,249.75|
In the end the value recorded into the account Customer Data Capitalized adds up to $ 131,244.75.
The benefit of this (improperly named) "Market" approach is the clear perception of the realistic value of Customer Data; that is even more evident when the prospects of the market are negative and the interest rate carries a minus sign, yielding a decrease in the final balance of Customer Data Capitalized and a corresponding amount to be recorded as a Devaluation.
For instance, let's suppose that the prospects consist of a 5% decrease in Market Value.
Here is the new Accounting Entry in our example.
|Debit||Credit||$|
|Devaluation||Customer Data Capitalized||6,249.75|
The final value of Customer Data Capitalized is $ 118,745.25.
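Both the revaluation and the devaluation entries above follow from the same one-year compound capitalization, sketched here:

```python
# One-year "capitalization" of the Customer Data Capitalized balance,
# for both a positive and a negative market outlook, per the entries above.

def revalue(present_value: float, rate: float, years: int = 1) -> float:
    """Compound capitalization: principal and interest after `years`."""
    return present_value * (1 + rate) ** years

base = 124_995.0

up = revalue(base, +0.05)    # 131,244.75 -> Revaluation of 6,249.75
down = revalue(base, -0.05)  # 118,745.25 -> Devaluation of 6,249.75

print(f"Revaluation: {up - base:.2f}  Devaluation: {base - down:.2f}")
```

With one year as the reference time, the compound and simple formulas coincide, which is exactly the simplification argued for above.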
The application of the Revaluation approach of course also affects the ROI calculation, which differs even more from the one obtained by considering the Customer Data as treated under traditional accounting standards.
In this regard, be aware that the Revaluation (or Devaluation) account does not go into the Operating Income and, as a result, into the numerator of ROI.
In conclusion, digital assets and customer data are nowadays such a great value base for businesses to rely on that even the accounting methods cannot neglect them and keep considering them in the traditional way.
In this article we have tried to prove these concepts and to give them a practical solution that may help business managers to have better decision-making support.
Suggestions, requests for clarifications, opinions are welcome on Page Contacts, as always.
58. How to link the Capital-Investment Decisions to the Strategy
Capital-budgeting projects are evaluated mainly in their financial aspect, which leads us to choose among the most suitable methods we know: those which make use of Cash Flows, such as the Net Present Value, Internal Rate of Return and Payback Period, and those that look at ROI and other financial indicators.
Most times the business's top managers take the adherence of the capital-investment projects to the strategy for granted, based on their feelings and experience, so that the people charged with measuring the numerical advantage of one project over the others worry just about the financial aspect.
As a matter of fact it isn't always like this, and it would be very advisable to make use of multicriteria techniques that help managers assess different projects from different points of view.
More importantly, these methods would allow managers to assess compatibility with the business strategy and, going deeper, even the insertion of the selected projects into one or more of the perspectives of the Balanced Scorecard in order to monitor the execution of the strategy over time.
We all know the BSC usually breaks down into four areas known as the Financial, Learning and Growth, Customer and Internal Processes perspectives; how you can insert and use the chosen projects as metrics there is not the object of our dissertation, which focuses instead on how to evaluate capital-investment decisions under aspects of a different nature.
The centerpiece of this dissertation is therefore one of these multicriteria techniques, and we are going to focus on the most useful one:
The AHP, or Analytic Hierarchy Process, whose purpose is to quantify all of your evaluation criteria, both tangible and intangible, and whose main characteristic is the pairwise comparison of the criteria.
AHP makes you assign a numeric value to the evaluation criteria to "weigh" their relative importance. The higher the number assigned to one criterion the higher its importance.
In the end you may see to which extent each project is in line with your strategic goal.
Let's go step by step to analyse this method and its usefulness.
What are the main phases of AHP?
1. Define the main goals of the business that can be named as criteria to be considered for the decision making in the choice of the project.
For instance, we spot these three criteria: Cash Flow creation, Share Growth in the market segment, Higher Quality of the Products.
2. Assess the relative weight of each decision criterion.
It consists of designing a pairwise comparison matrix, comparing the criteria listed in the left column with those listed across the top.
After that you may enter a number into each cell whose function is to quantify the dominance of one element compared to another.
If there are many people taking part in this evaluation process, you should resort to some form of mean of their judgments.
3. Calculate the weights of the criteria.
Divide each value by its column total, then sum the values in the cells of the respective row.
Finally, calculate the average of each criterion's row to assign it a weight that you can compare.
In this way you will have achieved a hierarchy of priorities per criterion. The highest score corresponds to the option most in line with your goal.
4. Check out the consistency of the judgments.
If in the matrices a participant states that Cash Flow creation is more important than Higher Quality of the Products, the judgments on the other criterion (Share Growth in the market segment) must be consistent with that statement.
Let's make an example:
If Cash Flow creation = 4 x Higher Quality of the Products
and Higher Quality of the Products = 1/4 x Share Growth in the market segment,
that means that "Share Growth in the market segment" should have the same value as "Cash Flow creation" and have 1 in the cell of the matrix when compared to it. If this doesn't happen, the data is inconsistent and the calculations must be made again before interpreting the matrix.
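The consistency check of step 4 is usually formalized with Saaty's Consistency Ratio (CR = CI/RI, acceptable below about 0.10). Here is a minimal sketch, applied to a matrix that encodes exactly the example above (Cash Flow = Share Growth, both 4x Higher Quality), which is perfectly consistent:

```python
# Saaty's consistency check for a pairwise comparison matrix.

def consistency_ratio(A):
    n = len(A)
    col_sums = [sum(row[j] for row in A) for j in range(n)]
    # priority vector: average of the column-normalized rows
    w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # estimate of the principal eigenvalue lambda_max
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)            # Consistency Index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random indices (3-5 criteria)
    return ci / ri

# order: Cash Flow creation, Share Growth in M.S., Higher Quality of Product
A = [[1, 1, 4],
     [1, 1, 4],
     [1/4, 1/4, 1]]
print(consistency_ratio(A))  # ~0: fully consistent (threshold ~0.10)
```

When participants' judgments are not transitive, as the text warns, CR rises above the threshold and the comparisons should be revisited.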
5. Analyse the competing projects and compare them with each other with reference to the criteria, following the same "rules".
6. Obtain the overall priority score of each project.
As I often repeat, the best way to explain a concept is to make an example.
Here it is:
- Let's start from the goals described earlier
Cash Flow creation
Share Growth in the market segment
Higher Quality of the Products.
- Then you can assess the relative weight of each criterion by assigning a score in the following matrix.
|Cash Flow creation||Share Growth in M.S.||Higher Quality of Product|
|Cash Flow creation||1||1/3||2|
|Share Growth in M.S.||3||1||1/4|
|Higher Quality of Product||1/2||4||1|
- Divide each value by its column total.
|Cash Flow creation||Share Growth in M.S.||Higher Quality of Product|
|Cash Flow creation||0.22||0.06||0.62|
|Share Growth in M.S.||0.67||0.19||0.08|
|Higher Quality of Product||0.11||0.75||0.31|
- Calculate the average of each criterion's row to assign it a weight that you can compare.
|Cash Flow creation||(0.22+0.06+0.62)/3||0.30 (30%)|
|Share Growth in M.S.||(0.67+0.19+0.08)/3||0.313 (31.3%)|
|Higher Quality of Product||(0.11+0.75+0.31)/3||0.39 (39%)|
We can see from these values that Higher Quality of Product is the most important criterion, since its weight adds up to 39%.
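The weight derivation above (column-normalize, then average each row) can be reproduced with exact arithmetic; the one-decimal rounding in the tables shifts the last weight slightly, but the ranking is the same:

```python
# AHP criterion weights from the pairwise comparison matrix above.
# Criteria order: Cash Flow creation, Share Growth in M.S.,
# Higher Quality of Product.

A = [[1,   1/3, 2],
     [3,   1,   1/4],
     [1/2, 4,   1]]

n = len(A)
col_sums = [sum(row[j] for row in A) for j in range(n)]
# normalize each column, then average each row to get the weights
weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

print([round(w, 3) for w in weights])  # [0.3, 0.31, 0.39]
```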
As you may have noticed, each of the three criteria belongs to a different area among the Balanced Scorecard perspectives we saw above, so that the selected project may be inserted into one of the specific areas (how to do it can be discovered by writing on the Contacts page of this website).
Cash Flows belong to the Financial Area, Share Growth to the Customer Perspective and, finally, Higher Quality of Product to Internal Processes.
Don't forget that all the BSC areas are connected, so that the results of a particular Critical Success Factor (in our case, for instance, Higher Quality of Product) belonging to a specific area affect the results of another CSF belonging to another area (in our case, Share Growth in the Market Segment), which in turn impacts the ultimate CSF (in our case, Cash Flows) belonging to yet another area.
Let's turn back to the steps.
- Analysing the projects in competition by comparing them with each other with reference to each of the criteria by following the same "rules" in order to obtain the overall priority score of each project.
A) In our example we can compare the different projects in competion (let's suppose A, B, C) against the Higher Quality of Product, then against Cash Flow creation criteria and finally against Share Growt. The priority score of each project is obtained in the same way we have seen previuosly.
|Higher Quality of Product|
|Project A||Project B||Project C||Priority|
|Cash Flow Creation|
|Project A||Project B||Project C||Priority|
|Share Growth in the M. S.|
|Project A||Project B||Project C||Priority|
B) Multiply the score of each project with reference to each criterion by the weighted priority of that criterion.
Project A = Weighted score of the Project A for the "Higher Quality of Product" by the Weighted score of "Higher Quality of Product" = 0.20 x 40.3 = 8.06
Project A = Weighted score of the Project A for the "Cash Flow Creation" by the Weighted score of "Cash Flow Creation" = 0.35 x 30 = 10.5
Project A = Weighted score of the Project A for the "Share Growth in the M. S." by the Weighted score of "Share Growth in the M. S." = 0.36 x 31.3 = 11.27
Project B = Weighted score of the Project B for the "Higher Quality of Product" by the Weighted score of "Higher Quality of Product" = 0.16 x 40.3 = 6.45
Project B = Weighted score of the Project B for the "Cash Flow Creation" by the Weighted score of "Cash Flow Creation" = 0.33 x 30 = 9.9
Project B = Weighted score of the Project B for the "Share Growth in the M. S." by the Weighted score of "Share Growth in the M. S." = 0.37 x 31.3 = 11.58
Project C = Weighted score of the Project C for the "Higher Quality of Product" by the Weighted score of "Higher Quality of Product" = 0.59 x 40.3 = 23,77
Project C = Weighted score of the Project C for the "Cash Flow Creation" by the Weighted score of "Cash Flow Creation" = 0.31 x 30 = 9.3
Project C = Weighted score of the Project C for the "Share Growth in the M. S." by the Weighted score of "Share Growth in the M. S." = 0.77 x 31.3 = 8.45
C) Get the overall priority of each project by summing up all the new weighted criteria scores and rank the projects.
Project A's Overall Priority Score = 8.06+10.5+11,27 = 29,83
Project B's Overall Priority Score = 6.45+9.9+11.58 = 27,93
Project C's Overall Priority Score = 23.77+9.3+8.45 = 41,52
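The whole procedure can be replayed in a few lines of Python. This is a minimal sketch using only plain Python; the pairwise values and the project scores are the illustrative ones from the example above.

```python
# Minimal AHP arithmetic in plain Python (no external libraries).
# Pairwise values and project scores are the illustrative ones from the example.

def criterion_weights(matrix):
    """Divide each entry by its column total, then average each row."""
    n = len(matrix)
    col_totals = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_totals[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

# Rows/columns: Cash Flow creation, Share Growth in M.S., Higher Quality of Product.
criteria_matrix = [
    [1,     1 / 3, 2],
    [3,     1,     1 / 4],
    [1 / 2, 4,     1],
]
weights = criterion_weights(criteria_matrix)  # roughly [0.30, 0.31, 0.39]

# Priority score of each project against each criterion, same order as above.
project_scores = {
    "A": [0.35, 0.36, 0.20],
    "B": [0.33, 0.37, 0.16],
    "C": [0.31, 0.27, 0.59],
}

# Overall priority = sum of (project score x criterion weight), scaled to 100.
overall = {name: sum(s * w * 100 for s, w in zip(scores, weights))
           for name, scores in project_scores.items()}
for name, total in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(name, round(total, 2))
```

Hand-rounding the weights to 30, 31.3 and 39, as in the walkthrough, shifts the totals by a few decimals, but the ranking (C first, then A, then B) stays the same.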
57. A full view on the use of the Value Chain Analysis
When we talk about the Value Chain, our mind recalls all we have learnt from Michael Porter about the breakdown of a business's activities into nine categories, regrouped into two major ones according to their aptitude to create value for the customers.
Those two are Primary Activities (consisting of Inbound Logistics, Operations, Outbound Logistics, Sales & Marketing and Service) and Support Activities (that include Procurement, Technology, HR and Infrastructure).
The definition of these categories is well known and easy to find in the rich literature.
In the real life of the businesses each of those categories includes many other subcategories that vary according to the industry and whose importance depends on the strategy chosen.
Prior to entering the very core of the article, let's take a step back by recalling the meaning of Value Chain.
It is the full range of business activities needed to create a product or service and deliver it to the customers while ensuring their satisfaction.
I talk about satisfaction since we include the after-sales activities too.
Of course the width of the Value Chain varies based on the kind of industry and on whether the company is a manufacturing one or a service one.
Why is it so important for a top manager to bear in mind the composition of the Value Chain?
Even more, why and when should a top manager require a Value Chain Analysis?
We all know that the general purpose of a value-chain analysis is to increase the efficiency of the business activities in order to deliver maximum value at the least possible cost.
From my point of view this sentence states the obvious, since every manager's decision is made to improve the bottom line in one way or another, and that's why I prefer explaining the need for the Value Chain Analysis by focusing on its goals: External and Internal.
Let's start with the External goals.
The company may be facing such critical moments in its life that it should consider or reconsider its core activities.
What are these moments?
The first is the "birth" that must be preceded by a detailed analysis of the Value Chain of the industry where the business wants to operate.
In other terms, it is the step prior to a SWOT analysis, requiring managers to break down the value chain of the reference industry and understand the company's positioning in each of the identified activities with regard to its competitors.
As a matter of fact, the company should have a competitive advantage to be profitable and should go deep to see which activities can ensure that goal.
So it may also decide, after a full SWOT analysis, whether to deal with all the activities of the Value Chain of the related industry or to focus just on some of them (for instance it may make the decision of dealing with the Design of a product family and collaborating with external manufacturers).
The birth of a business is not the only moment to conduct a Value Chain Analysis.
Top managers should bear in mind the need for it throughout the business's life after the start-up, periodically checking whether it is better to carry on with the "chosen" activities (boosting the most profitable ones) or to outsource some of them.
The decision is often strategic, in the sense that even when the costs of a certain kind of activity are increasing compared to the others, top managers may decide to keep it because it is strategic to the business's goals, preferring to look for ways to make it more efficient.
Let's move on to the internal objectives.
Have you ever considered the Value Chain Analysis as an important indicator of the appropriateness of a Cost Accounting/Management System?
Suppose that the decision about the choice of the activities to be carried out or not doesn't come into play but something different than expected about the costs comes up from the internal reports.
Just to make a simple example, let's take into account an Operating Income Statement where the lines between the Gross Margin and the Operating Margin (or, if you prefer, Operating Income) refer mainly to the Support Activities (except for the Sales ones), characterized by the General Overheads.
When you examine an Actual report compared to a Budget/Forecast and the Operating Margin Ratio (Operating Margin/Revenue) has varied to a wider extent than the Gross Margin Ratio (Gross Margin/Revenue), which relates to Operations (the main primary activities) and is more directly and correctly linked to Volume than the remaining activities, some conclusions have to be drawn.
External conditions being equal or similar, that means you have estimated the Overheads erroneously, based on the wrong cost drivers, and you should reconsider the way you predict them by relying on a new cost management system.
In other terms, if you are using a Volume-Based approach you should move to an Activity-Based one, particularly when the difference between Estimate and Actual is so high; if you are already making use of it, that difference means you are using the wrong cost drivers and have to resort to other (statistical) methods to spot the right ones.
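This diagnostic can be sketched in a few lines of code. All figures below are hypothetical, and the factor of 2 used as a threshold is an arbitrary choice for the illustration:

```python
# Hypothetical figures: compare Actual vs Budget margin-ratio drifts.
# A much larger drift in the Operating Margin Ratio than in the Gross
# Margin Ratio points to badly estimated Overheads (wrong cost drivers).

def margin_ratios(revenue, cogs, overheads):
    gross_margin = revenue - cogs
    operating_margin = gross_margin - overheads
    return gross_margin / revenue, operating_margin / revenue

budget_gm, budget_om = margin_ratios(revenue=1_000_000, cogs=600_000, overheads=250_000)
actual_gm, actual_om = margin_ratios(revenue=980_000, cogs=590_000, overheads=310_000)

gm_variance = abs(actual_gm - budget_gm)  # volume-linked, stays small here
om_variance = abs(actual_om - budget_om)  # also absorbs the overhead estimation error

# Arbitrary threshold: flag when the operating drift is at least twice the gross drift.
if om_variance > 2 * gm_variance:
    print("Review the cost drivers behind the Overhead forecast")
```

With these numbers the Gross Margin Ratio barely moves while the Operating Margin Ratio drops sharply, so the check fires: the overhead forecast, not the volume-linked costs, is the suspect.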
Maybe the use of the Value Chain Analysis for this purpose is not so straightforward, but these lines have been written to show how the knowledge and analysis of all the activities of a Value Chain can present more aspects and facets than one might think.
See you at the next issue.
56. Labor-Intensive Businesses and the Learning Impact on Cost Estimation
Capital Budgeting, Planning and Forecasting, and Performance Evaluation are just some of the cases where the productivity of the human factor can be observed to affect output differently over time, at different stages of a business process.
The same applies when you deal with Make-or-Buy decisions as well as with Cost-Volume-Profit Analysis.
What are we talking about?
Here is a classic example applied to the Budget Process:
Budget Output: 1,200,000 units
Average Direct Labor Time per product unit (including the inefficiency rate): 5 minutes
Total Budget Working Hours: 1,200,000 x 5 = 6,000,000 min = 100,000 hrs
Considering a Learning Rate of 90% (see below for the definition and calculation of the Learning Rate) reached after 6 months, and supposing a steady monthly production output:
Total Budget Working Hours: 600,000 x 5 min + 600,000 x 4 min = 5,400,000 min = 90,000 hrs
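The arithmetic above can be replayed with a few lines of Python, a minimal sketch of the example's own numbers:

```python
# Budget labor hours without and with the learning effect, as in the example.
units = 1_200_000
minutes_per_unit = 5  # average Direct Labor time, including the inefficiency rate

no_learning_hours = units * minutes_per_unit / 60  # 6,000,000 min -> 100,000 hrs

# With a 90% learning rate after 6 months and a steady monthly output, the
# second half-year runs at 4 min/unit, so that the cumulative average at the
# doubled output is 4.5 min (90% of 5).
first_half_min = 600_000 * 5
second_half_min = 600_000 * 4
with_learning_hours = (first_half_min + second_half_min) / 60  # 90,000 hrs

print(no_learning_hours, with_learning_hours)
```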
This is a very short and "rough" example of the importance of the Learning consideration in the cost estimation processes of a labor-intensive firm.
Yes, we are considering the case where Labor is the main factor that characterizes the business processes and produces the product/service delivered to the customer.
Furthermore, when the tasks performed are so repetitive that a productivity enhancement is a natural effect of their continuous execution over time, it is easy to understand why, in business estimation processes of all kinds, you cannot neglect Learning and the best way to calculate its impact.
One may think that this concept applies only to the manufacturing industry, but that assertion is reductive. I should say that it holds for any kind of labor-intensive business, ranging from manufacturing to services and, last but not least, software companies, where developers become faster at creating a programme unit (usually a given number of code lines; for more specifications see page Contacts) the more of such work they have already done.
First of all, it is important to see the commonly accepted formula used for calculating how Learning impacts the time and the costs of the Labor resource.
It is the following:
Learning Rate = Y / a
where:
Y = Cumulative Average Time per Unit at the doubled Output
a = Cumulative Average Time per Unit at the reference Output
Taking the numbers from our previous example with Learning:
Y = Total Time at the Doubled Output / Doubled Output = 5,400,000 min / 1,200,000 = 4.5 min
a = 5 min
Learning Rate = Y/a = 4.5/5 = 0.9 = 90%
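The example is an instance of the cumulative-average learning model (often attributed to Wright): every time cumulative output doubles, the cumulative average time per unit is multiplied by the learning rate. A minimal sketch, treating the first 600,000 units as the reference block:

```python
import math

def cumulative_average_time(reference_avg, output_multiple, learning_rate):
    """Cumulative-average model: Y = a * X**b, with b = log(learning_rate)/log(2).

    reference_avg   -- cumulative average time per unit at the reference output
    output_multiple -- cumulative output as a multiple of the reference output
    learning_rate   -- e.g. 0.9 for a 90% learning curve
    """
    b = math.log(learning_rate) / math.log(2)
    return reference_avg * output_multiple ** b

# Reference block: 600,000 units at an average of 5 min per unit.
# Doubling the output (X = 2) with a 90% learning rate:
y_at_double = cumulative_average_time(5, 2, 0.9)  # 4.5 min
learning_rate = y_at_double / 5                   # 0.9, i.e. 90%
```

The same function also gives the expected average at any other output multiple, which is what makes the model usable in a budget, not just at exact doublings.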
Of course, at some point the Learning ends and no further Unit Time reduction can be expected, except in those cases when a change in the business process happens, such as a new technique, a change in the labor mix and so on.
One thing is certain: when everything else is unchanged, the productivity enhancement stops and the Learning Curve becomes a straight line.
Another point to be highlighted is that the measurement of the Learning impact through the Learning Rate seen in the lines above, which defines it as the percentage by which the Average Unit Time decreases as the Output doubles, relies on the assumption that the productivity enhancement goes on at a constant rate.
That is not always true, since there can be cases where the enhancement behaves differently; for instance, the Average Unit Time may decline by a given percentage after some Output Units and by a different percentage after a further number of Output Units.
However, the most suitable and realistic Learning model is not the focus of this article (if you want to dig deeper into this, you can ask for info on page Contacts); we prefer focusing on the need for analysts and controllers to take Learning into account when they are estimating costs, their business is labor-intensive and the tasks performed are repetitive.
This need becomes even more real and strategic when the company pursues a Cost Leadership Strategy, and even more so when the Pricing model is based on a Mark-up on costs.
How can you disregard an appropriate Learning Curve Analysis?!