Topics
23. The cost management and the customer-driven value model (PREVIEW)
The full dissertation is available on page "Shop" of this website
http://www.thestrategiccontroller.com/2/shop_3813594.html
How many times do you state or hear that you are adopting the concept of Value in addressing your customers?
Probably more times than you could count!
Nonetheless, revenues are not increasing as you wish and the market share is the same as before, or even smaller.
As a matter of fact, if you are really creating value for the customers, you are at the same time creating revenues for your business.
And even if you launched a value engineering program, after roughly the second year profitability has been decreasing in your customer segments, because the “reasoned” cost cuts have run out of effect.
Then, what is happening?
In my opinion, there could be two answers:
1) You are adopting an internal perspective of Value.
2) You are not reinvesting the savings from the “waste” activities into the value-add ones.
What do I mean by internal/external perspective of value?
Suppose that, when deciding which product/service features are important to the customers, you rely on the opinions of the business managers, who rank the needs/requirements of the customers on the basis of their experience and knowledge of the market, and who instruct engineering/production to develop the processes/products and manufacture the products accordingly. In that case, you are adopting an internal perspective of Value.
This approach is also typical of some recent cost management techniques, like Target Costing, that try to cut costs while keeping the functionalities of the product as before...............
..................You’ll have to remember what the customers won’t ever pay for and what is in excess of the normal activity.
That’s why a new kind of waste takes on importance: the excess spending of the business on Value-Add activities tied to specific value features (customer preferences).
To explain this concept better, I’ll anticipate something you will be able to understand more fully further on.
Let’s imagine that you manufacture the product A, which has just 2 Value features (Price and Delivery Speed) according to the customer perspective.
The costs of the activities attributable to these preferences are $ 12,000 and $ 10,000.
The revenues traceable to them are respectively $ 24,000 and $ 8,000...................
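A minimal sketch of this feature-by-feature comparison in Python, using only the example’s figures; reading revenue minus traced cost as a “feature margin” is my own illustrative assumption:

```python
# Revenues and activity costs traced to the two value features (example data).
features = {
    "Price":          {"revenue": 24_000, "cost": 12_000},
    "Delivery Speed": {"revenue":  8_000, "cost": 10_000},
}

# A negative margin flags excess spending on a feature the customers
# are not willing to pay for.
margins = {name: f["revenue"] - f["cost"] for name, f in features.items()}
print(margins)
```

Here Delivery Speed shows a negative margin (-$2,000): the business spends more on the related activities than the customers pay for them, the kind of new waste described above.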
..................As you can see in table 2, the customers of these two segments have different requirements about the features the product should have for their purchase to take place.
In this example the overall results could be guessed from the clear difference in the buying power of the two customer categories, but going into the details you can give a weight to these data.
In the view of the first segment, the so-called “Writing pen” customers, the quality of the writing is by far the most important feature (55%), followed by the chance of choosing among several pen models (Model Availability - 15%) to purchase (in our example just two: fountain pens and ball pens) and the cheapness of the product (Price feature - 14%).
About the second segment, High-end pen customers, a different...................
.............In fact, the standard deviations are respectively 0.84 for the former market and 1.13 for the latter.
That means that, if the firm were to modify something in its activities intended for the first market, the modification should be smaller than for those intended for the second kind of customers.
Considering the two kinds of strategy adopted, the firm succeeded more with its efforts in the Price leadership strategy used for the Writing pen customers than in the Differentiation strategy suited to the High-end customers.
This conclusion holds for the past period, but what about the choices to be made for the future?
How should the company reason?
Let’s look into every value...................
.................That is useful, as we have seen, for spotting the most and the least effective Value-Add activities for each value feature.
We want to go beyond.
For instance, whether you want to, or must, choose the most profitable segment for any decision the business is to make, or for any further benchmarking it wants to do in order to have more details, the following analysis takes on a higher value if it includes the total costs of the company.
When I write about the total costs of the company, I also mean those of the resources consumed in all the operating activity categories we examined before.
We should do this by.............
22. What if you compete on timeliness and speed to customers?
The competitiveness of many firms is based on their ability to meet customer demand on time and more consistently than their competitors.
This concept is valid for both the manufacturing firms and the service ones.
To turn this strategy into reality, and in a profitable way, there are some steps to follow before it’s too late, before facing complaints from the customers and losing a share of the market in the medium run.
It’s better to make a sort of internal strengths-weaknesses analysis before choosing a strategy, rather than choosing one and then realizing you are unable to fulfil it.
The strategy must be well thought out on the basis of these internal factors, after examining the external ones, the so-called opportunities and threats.
When some constraints emerge, there are two ways to follow: either eliminate these constraints or drop the old strategy and address other success factors of the market, if any.
How can these issues be identified and then eliminated?
A deep knowledge of the internal processes is needed: the time spent on them compared to the time the customers’ requests allow, the extent to which every process is manned, and its adequacy to what the market demands.
In other terms, you must carry out an analysis as detailed as you can, starting from the potential share of the whole market you address on the basis of your strategy.
Some concepts should be recalled before showing how to proceed.
The Cycle Time is the time between the receipt of an order and the delivery of the output to the customers.
Within it there is a lapse devoted to the “execution” of the activities performed to manufacture the product or carry out the service (in service businesses), called the processing time.
The ratio of the Processing Time to the Cycle Time is the Typical Process Efficiency (TPE).
The terms used to identify these indicators change often from firm to firm. In particular the Cycle Time can be referred to as Lead Time.
In any case, when the strategy of a firm is based on the timeliness and speed of delivery, the TPE target is 1 or, more realistically, one tries to get as close to that value as possible.
Nonetheless, a good ratio may not be enough to meet the requirements of customers that need a fast delivery.
In other terms the processing time must be reduced.
The steps to help the managers pursue this objective are the identification of some constraints (here called bottlenecks) and the choice of the way to remove them.
You can proceed by using two methods: one simple and immediate, the other more detailed.
The former comes from the German management school and is called Takt Time.
There are four steps to be followed:
1) Calculate the Available Capacity of the product/service line in a period of time. It’s advisable to consider the Practical one (in terms of labour hours, or machine hours in a capital-intensive business), that is the theoretical capacity less holidays, programmed maintenance, set-up time.... and all the statutory and appropriate breaks needed to ensure conditions of normal efficiency.
2) Estimate the units of output demanded from the customers in the same period of time.
3) Calculate the ratio of the Available Capacity to the Output units, getting as a result the time (Takt time) per unit needed to meet the demand from the market at once, out of inventory, without potential delays in the delivery to the customer.
4) Compare the processing time activity by activity per unit to the Takt Time.
If the company records a negative variance from the reference time, it means a failure to satisfy the customer requirements about the speed and timeliness of the delivery of the product/service.
It is better to keep the processing time slightly below the Takt time.
An example will clarify the concept:
1) Monthly Available Capacity: 400 000 (labour time in seconds)
2) Units to be sold: 200 000
3) Takt time: (1) / (2) = 400 000 / 200 000 = 20”
Processing time by Activity per unit:
Activ 1: 20”
Activ 2: 18”
Activ 3: 28”
Activ 4: 20”
Total Processing Time: 86”
As you can see, the bottleneck is Activity 3, where the time spent on the product/service (28”) is higher than the Takt time (20”). The result is an accumulation of work to be completed, which translates into a delay in making all the output units needed to meet the customer demand at the requested time.
When this situation occurs you can resort to three solutions:
A) Increase the finished product inventory (if your company is a manufacturing one), incurring all of the respective inventory costs.
Moreover, if you are adopting a JIT policy with your suppliers, your efforts will be made partially vain by the increase in Work-in-process and Finished product inventory.
B) Try to bring the time of the bottleneck (Activity 3) below the Takt time.
In this regard you can redesign either the typical process or the product/service, taking care not to change the value delivered to the customers.
C) Add capacity to the bottlenecks.
As I wrote before, there is another, more detailed way to identify the bottleneck that, in my opinion, is the most appropriate to make another solution more visible: the shift of capacity from one activity to another, if possible.
I say “if possible” because of the features of the activity to be performed.
For instance, if the workforce needs to be specialized in the single activities, it’s clear that shifting the excess capacity of a single activity to the bottleneck is not realistic.
Here is an example of the detailed way.
Alpha ltd is a company which produces three products: A, B, C.
The activities performed for every product are 4 (to simplify they are in common to each of the three products).
Alpha ltd has estimated for the next month potential sales of 6,000 units broken down this way:
January 2019 | Product A | Product B | Product C
Estimated Sales (Units) | 3 100 | 1 900 | 1 000
In order to see if the firm will be able to meet the customer demand as timely as possible, you’ll have to take into account the processing time, activity by activity, for each product together with the available capacity (in this case labour hours).
January 2019 | Minutes for Product A Unit | Minutes for Product B Unit | Minutes for Product C Unit | Number of workers at 32 hrs a week | Monthly Available Capacity (1) = 128 hr x 60 x employees | Monthly Capacity Needed by the Market (2)* | Variances (1-2)
Activ 1 | 35 | 25 | 30 | 30 | 230 400 | 186 000 | 44 400
Activ 2 | 31 | 24 | 28 | 20 | 153 600 | 169 700 | -16 100
Activ 3 | 15 | 16 | 18 | 22 | 90 112 | 94 900 | -4 788
Activ 4 | 20 | 22 | 20 | 19 | 145 920 | 123 800 | 22 120
Total | 101 | 87 | 96 | 91 | 620 032 | 574 400 | 45 632
* The Monthly Capacity Needed by the Market (2) by activity is calculated this way:
Activity 1 = (Min for Product A Unit X Product A Estimated Sales) + (Min for Product B Unit X Product B Estimated Sales) + (Min for Product C Unit X Product C Estimated Sales) = (35 x 3100 + 25 x 1900 + 30 x 1000) = 186 000
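The same calculation, extended to all four activities, can be replicated in a few lines of Python (the available-capacity figures are taken directly from the table above):

```python
# Capacity check for Alpha ltd (example data; times in minutes).
minutes_per_unit = {            # activity -> minutes per unit of A, B, C
    "Activ 1": (35, 25, 30),
    "Activ 2": (31, 24, 28),
    "Activ 3": (15, 16, 18),
    "Activ 4": (20, 22, 20),
}
estimated_sales = (3_100, 1_900, 1_000)   # units of A, B, C

# Monthly available capacity by activity, as shown in the table.
available = {"Activ 1": 230_400, "Activ 2": 153_600,
             "Activ 3": 90_112, "Activ 4": 145_920}

# Capacity needed by the market: sum of minutes-per-unit times sales.
needed = {a: sum(m * q for m, q in zip(mins, estimated_sales))
          for a, mins in minutes_per_unit.items()}
variance = {a: available[a] - needed[a] for a in needed}
bottlenecks = [a for a, v in variance.items() if v < 0]
print(needed, variance, bottlenecks)
```

Running it shows negative variances for Activities 2 and 3, the bottlenecks, and a total excess capacity of 45 632 minutes.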
As you can see, Alpha ltd has a total excess capacity of 45 632 minutes, but with reference to the single activities, two out of four show a deficit.
In other terms, those activities (2 and 3) are bottlenecks that require some adjustments to meet the customer demand.
Alpha could (among other solutions, as I wrote above) shift workforce from the activities with slack time to the bottlenecks, provided the people needed to perform them are not highly specialized.
On the basis of these data, together with the Unit Price and the Unit Variable Cost, you can also determine the most profitable product mix (a topic dealt with in article number 7 of this website).
Note that, in case of constraints, this most profitable mix enables the firm to meet the market demand only partially, limiting the damage; but in the long run the market segments sensitive to the critical success factor of timeliness will “punish” the failure of the company to fulfil the orders on time.
In sum, the companies that choose to compete on the basis of timeliness and speed in fulfilling the sales orders, and, if manufacturing firms, at the same time out of inventory, should strictly analyze every aspect of the Cycle Time (Lead Time), that is the time between the receipt of an order and the delivery of the product/service.
First of all, that means looking into the processing time needed to bring forth the product/service and the available capacity on hand.
In so doing, you can take corrective actions before it’s too late, before failing the customer expectations.
21. The Strategic Fixed Overheads
There is a strategic side to every decision, number and transaction, even in the mere management control activities.
Behind even the most apparently operational reasoning there are settings that result from the business strategy.
If we use the procedures that have always been in place, regardless of their consistency with the objectives of the company and its way of achieving them, we are not controllers but robots.
What I am saying is true even for the choice of the activity driver amount (labour hours, machine hours, ...) over which to spread the fixed industrial overhead costs when you are using a standard cost system and the full cost method for product-costing purposes.
The strategy of the business affects the amount chosen, for instance when it is a price leadership one and the managers set the price based on the cost of the products.
On the other side, a differentiation strategy can require new investments in better capacity-related resources; and if you don’t have the right information on your capacity, you can invest money in new machinery and plants without knowing that you already have in place some available fixed assets that could simply be “modernized” or “manned” in the proper way.
I want to highlight some important points.
First, every aspect dealt with from now on concerns exclusively the strategic side, and not any accounting “stratagem” used by managers of manufacturing businesses to increase by some points the profit in the income statement at the fiscal period end.
In fact, in a standard cost accounting system, depending on how some fixed overhead cost variances are disposed of at the end of the fiscal period, you can have the chance of increasing the income through the setting of one activity driver amount or another, since it impacts the value of the ending inventory.
This aspect falls beyond the scope of this article.
Second, the following issues concern the fixed industrial overheads, meaning by that the expenses incurred to finance the typical activity of the firm.
We’ll call them for short fixed overheads.
The issues being dealt with are valid also for service organizations.
How can the choice of a number facilitate the pursuit of one strategy rather than another?
First of all, we need to recall how the standard cost system works with regard to the fixed overheads:
- At the end of each reporting period, a standard allocation rate, set at budget time, is applied to the output of the period.
Example:
1) SAR (Standard allocation rate): 25
2) AQ (Actual activity driver amount, machine hours): 8000
3) Standard Fixed Overheads applied to actual production: 1 x 2 = 200000
If you want to reason also in terms of output units, you’ll have to divide the AQ by the standard quantity of driver per unit (SQ).
In this case the number of machine hours per unit.
SQ: 5
AQu (output units): AQ/SQ = 8000/5 = 1600
Standard Fixed Overheads Applied to actual production: SAR X AQu X SQ = 25 X 1600 X 5 = 200000
- The variances calculated with reference to the actual values are recorded at the end of each reporting period and accumulated until the end of the fiscal period (year), when they are disposed of according to appropriate methods.
How do we set the standard allocation rate (SAR)?
There are four stages to be followed:
a) Define at budget time the money to be spent next year on capacity-related costs that do not vary when the output of the firm varies: the fixed industrial overheads.
b) Take an activity driver (measure) that enables you to allocate the fixed overheads to the products. Lacking a direct relationship with the output, you need to choose a measure that represents as closely as possible the adherence of the fixed overheads to the activity of the firm. We can choose from different drivers (in a labour-intensive context the labour hours are the most suitable; in a capital-intensive one the machine hours are the best representative factors).
c) Choose a denominator volume, the amount of the activity driver over which we can spread the fixed overheads (number of labour hours, machine hours).
d) Calculate the Standard fixed overhead allocation rate by dividing the value determined at point “a” by the amount of point “c”.
Example:
a) Budgeted fixed overheads: $ 250,000
b) Activity driver: machine hours
c) Budgeted Denominator volume: 10000 machine hours
d) SAR (standard allocation rate of fixed overheads): a/c = $ 25 per machine hour
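As a quick sketch, the rate-setting steps a-d can be written out in Python; the figures come from this example, and the actual machine hours from the earlier applied-overheads example:

```python
# Standard allocation rate and applied fixed overheads (example figures).
budgeted_fixed_overheads = 250_000     # step a, dollars
denominator_volume = 10_000            # step c, machine hours (budget capacity)

sar = budgeted_fixed_overheads / denominator_volume   # step d: $25/machine hour

actual_machine_hours = 8_000           # AQ of the earlier example
applied = sar * actual_machine_hours   # fixed overheads applied to production
print(f"SAR = ${sar:.0f}/machine hour, applied = ${applied:,.0f}")
```

The $25 rate and the $200,000 applied amount match the figures used earlier in the article.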
Note that when there are different product lines, some of which have special fixed costs (for instance, depreciation of machinery exclusively dedicated to one product line), some adjustment must be made in order not to burden other product lines with expenses that are not incurred for them.
This aspect is particularly important not only for product-costing purposes but also when department performances are assessed on the basis of the full cost: it doesn’t sound fair to charge them with costs incurred for the products of other "industrial" departments.
Which of the steps described above is crucial to our analysis?
Point c is the step that forms the informative base of strategic decisions, whether pricing decisions, investment ones or, again, the performance evaluation of operating departments.
In fact, the higher the denominator volume, the lower the SAR and the lower the fixed overheads applied to the actual production.
What criteria can be used to choose an appropriate base?
There are four:
- Theoretical capacity: the maximum number of machine hours (or other driver) available, 365 days a year, 24 hours a day.
- Practical capacity: the maximum number of machine hours (or other driver) available net of holidays for people, programmed maintenance and other “breaks” needed to repair usual inefficiencies of the product line (business).
- Normal capacity: the average forecasted driver volume over a span of 3 to 5 years, on the basis of the demand for the output of the firm.
- Budget capacity: the forecasted driver volume for the coming period, on the basis of the demand for the output of the firm.
In general, the budget capacity is the lowest of the four (together with the normal one, in a period of decreasing demand) and is not stable over the years, in the sense that it is subject to the periodic fluctuations of customer demand.
This aspect is negative when prices are set on the basis of the costs incurred, since every year the managers are obliged to modify the prices, also giving the customers a feeling of uncertainty.
Still with reference to pricing decisions, it is even more negative when the budget capacity is lower than the practical one, since, through a higher SAR (standard allocation rate), it charges the products with a greater share of fixed overheads, including the costs of the unused capacity.
That will result in higher prices and a potential decrease in demand, which in its turn will lower the output expectations for the coming period and, through a new, higher SAR, will cause the managers to increase the prices even more (the DEATH-SPIRAL effect).
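As a toy illustration of the mechanism, the loop below re-budgets the denominator volume each period using a hypothetical linear demand response; every figure is an assumption of mine chosen only to make the spiral visible, not a number from the text:

```python
# Toy death-spiral simulation: cost-plus pricing on budget capacity.
# All parameters below are illustrative assumptions.
fixed_overheads = 250_000
variable_cost = 40.0            # per unit (assumed)
markup = 1.20                   # 20% over full unit cost (assumed)
base_demand = 10_000            # units demanded at a $78 price (assumed)
sensitivity = 350.0             # units lost per extra dollar of price (assumed)

budget_volume = 8_000.0         # demand has already dipped below expectations
prices, volumes = [], []
for period in range(4):
    full_cost = variable_cost + fixed_overheads / budget_volume
    price = round(full_cost * markup, 2)
    prices.append(price)
    volumes.append(round(budget_volume))
    # Next period's budget volume shrinks as the price rises,
    # which raises the SAR and the price again: the spiral.
    budget_volume = max(1.0, base_demand - sensitivity * (price - 78.0))
print(prices, volumes)
```

Each period the cost-based price rises and the budget volume falls, exactly the feedback loop described above.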
As for the positive aspects of such a method, we can consider its simplicity, particularly appreciated when the company doesn’t have an information system deep enough to allow the needed calculations.
Coming back to the Death-Spiral effect, if the company is adopting a price leadership strategy, you’ll see very easily how the choice of the budget capacity as the base over which to spread the fixed overheads could be “deadly”, in particular in periods of uncertain demand.
Yet the budget capacity is the one most used by managers, regardless of their strategy, and that is all the more significant these days, when the indirect costs have taken an ever larger share of the total costs.
In my opinion the most useful and realistic capacity to be considered is the practical one for many reasons:
1. The resulting production volume variance can be interpreted as the cost of unused capacity.
2. The cost of unused capacity of point 1 is the base for investment decisions on capacity-related assets. In fact, by showing the unused capacity and giving it a value, it works as an additional informative support for the managers who must decide whether to purchase other fixed assets or take advantage of the existing, unused ones.
3. It gives a base that is steady over time on which to set the price, when pricing is based on costs and the company is not a price taker from the market, thereby facilitating the work of the managers and giving a feeling of certainty to the customers.
4. It allocates to the standard cost of an output unit a lower share of fixed overheads (when the practical capacity is higher than the budget one), thereby reducing the base over which the price is applied, regardless of the fluctuations of demand. In other terms, the customers will not be charged with the inefficiencies of the firm, and the death-spiral effect is avoided.
5. It’s consistent with the accounting rules for external reporting set by both GAAP and IFRS.
Point 5 has a facet of external and fiscal importance, so I won’t focus on it further.
The crucial point to this analysis is, as you have understood, the unused capacity.
How is the cost of unused capacity assessed?
It is calculated through the Fixed Overhead Production Volume Variance, that is the difference between the budget Fixed Overhead amount and the Standard Fixed Overheads applied to the production.
The latter term is achieved through the SAR (Standard Allocation Rate).
A numerical example, showing first the use of the Budget capacity and then of the Practical one, will clarify the concept of the unused capacity cost.
INPUT DATA
1. Practical Capacity: 12000 Machine hours
2. Budget Capacity: 10000 Machine hours
3. Actual Capacity: 8000 machine Hours
4. Actual Fixed Overhead Spending: $ 280000
5. Budget Fixed Overhead Spending: $ 250000
6. AAR (Actual Allocation rate): 4/3 = 280000/8000 = $ 35 per machine hour
7. SAR practical (Standard Allocation rate with Practical capacity): 5/1 = 250000/12000 = $ 20.83 per machine hour (rounded)
8. SAR budget (Standard Allocation rate with Budget capacity): 5/2 = 250000/10000 = $ 25 per machine hour
9. Standard Fixed Overheads applied to actual production with Practical Capacity: 7 X 3 = (250000/12000) X 8000 ≈ $ 166666
10. Standard Fixed Overheads applied to actual production with Budget Capacity: 8 x 3 = 25 X 8000 = $ 200000
Exhibit 1 (Denominator volume: Budget Capacity)
Budget Fixed Overheads (5) | Standard Fixed Overheads applied to Actual Production (10) | Production Volume Variance (5-10)
250 000 | 200 000 | 50 000
In this case the Production Volume Variance shows the extent to which the variance between the Budget Output (10000 machine hours) and the Actual Output (8000 machine hours) has caused a lower or higher absorption of the Fixed overheads in comparison to the forecast.
The absorption is higher when the Actual Output is greater than the Budget one, meaning that the share of Fixed overhead cost per unit is consequently lower, with a positive effect on the total unit cost.
In that case we also speak of Overapplied Fixed Overheads as a result of a different actual production.
The opposite happens when the Actual Output is smaller than the Budget one (the case of our example); here we can also speak of Underapplied Fixed Overheads as a result of a different actual production.
Exhibit 2 (Denominator volume: Practical Capacity)
Budget Fixed Overheads (5) | Standard Fixed Overheads applied to Actual Production (9) | Production Volume Variance (5-9)
250 000 | 166 666 | 83 334
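The two exhibits can be sketched with a small Python function over input data 1-5; exact arithmetic gives 83,333.33 for the practical-capacity variance, which the exhibit rounds to 83,334 via the rounded applied amount:

```python
# Production Volume Variance under the two denominator-volume choices
# (input data from the example).
budget_fixed_overheads = 250_000
actual_hours = 8_000
practical_capacity = 12_000   # machine hours
budget_capacity = 10_000      # machine hours

def volume_variance(denominator_hours: int) -> float:
    """Budgeted fixed overheads minus the amount applied to actual output."""
    sar = budget_fixed_overheads / denominator_hours
    applied = sar * actual_hours
    return budget_fixed_overheads - applied

pvv_budget = volume_variance(budget_capacity)        # Exhibit 1: underapplied
pvv_practical = volume_variance(practical_capacity)  # Exhibit 2: unused capacity
print(pvv_budget, round(pvv_practical))
```

Under the practical-capacity denominator, the larger variance is the one readable as the cost of unused capacity.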
It’s clear how a different volume of activity driver causes a different Production Volume Variance. It takes the same meaning as the Budget capacity one as to the “direction”, whether negative or positive, but under the Practical capacity hypothesis it can also be interpreted as an indication of how much the unused capacity costs.
I say “can be interpreted” because there are other, more detailed ways to calculate the unused capacity cost, which require some further information and bring about the breakdown of the unused capacity into Idle Capacity and Non-productive capacity.
This article is intended, once more on this website, to show how even the choice of a number from a range of options has its own informative value, different from those of the remaining options.
Furthermore, the choice of this number is a result of the strategy of the business, which has to adapt all of its information tools to its objectives and the “path” chosen to achieve them.
The procedure used to set the Fixed Overheads in a standard cost system is not so simple; if you want to ask for more details, you can get in touch with me by filling in the contact form on this website.
20. The assessment of the State of the Work according to the activities
The operating processes of a company working on projects, in particular projects carried out over a long time, are certainly very different from those of a company that manufactures products or provides services in a short time.
This difference also shows up in the work of the controller and the project manager, which is affected in a typical way in the cost estimation process, in the assessment of the output done at a specific date, and in the variance analysis.
Again, the main reason is the length of the time needed to accomplish the project, whereas in a mass production company the cycle time needed to give out a product/service to be delivered to a customer takes minutes, hours or days.
There are many topics we could deal with about project management control, but we prefer to focus now on the most typical of all: the state of work.
When a project starts, you’ll have to set some regular time points (called Time Now) over the whole duration of the project, at which to assess both the technical and the financial state of the work done up to that date.
If it is a multiyear project, the right periodicity might be monthly.
Of course that choice depends also on the kind of work.
On the financial side, at the chosen date you must measure the costs incurred and evaluate the output done in order to see if the budget is being respected.
The hard point at issue is how to evaluate the output done up to a specific date.
First of all, you need to see how much of the work to be delivered at the end of the project has been gradually done (up to the various check dates).
This value is expressed in percentage terms; by applying the same percentage to the estimated total revenues you obtain the money value of the output to date.
How to calculate that percentage?
All of the above needs to be done for every Work Package (WP).
You know the WP is the part of the final project corresponding to some specific activities put in place by a given department or part of a department.
Those activities must have a person responsible for their accomplishment.
All of these pieces of information are drawn from the WBS (Work Breakdown Structure) and the OBS (Organization Breakdown Structure).
The reasons why the WP is the object of this assessment are:
- It makes it possible to identify the person in charge of its execution.
- The activities falling into a WP have the same measurement unit (other WPs can have other kinds of measurement units, or be either not measurable or intangible, so that one has to resort to some appropriate methods).
- It makes the comparison to the estimated quantity possible, the WP having been an object of the budget stage.
Finding the percentage we are dealing with is hard when the output of the WPs is either intangible or not measurable.
As an example of the intangibility of the output, take the case of the approval by the Management Committee of a specific activity, consisting of the issue of an authorisation.
An example of non-measurability is when the engineering department works on a brand-new component of the final product: it cannot take the hours spent on it as a reference for measuring the advancement of the work, because the exact total amount is not known.
You can do it, but it’s unrealistic.
In any case, the owner of the WP, the person responsible for it, together with the project manager, must choose a criterion as objective as possible, one that enables them to face potential objections by the customer during the verification of the work done to date.
Now these methods will be shown by taking as a focal point the main characteristic of the activities/products of the WPs.
The categories this way classified are:
a) Activities whose duration is shorter than the lapse between two checks.
b) Activities whose duration is short but longer than the lapse between two consecutive checks and the risk of financial and time overrun is small.
c) Homogeneous activities and measurable products.
d) Activities with significant intermediate events (milestones).
e) Activities not subject to any kind of criterion.
f) Irregular activities.
a) Activities whose duration is shorter than the lapse between two checks.
Some WPs consist of activities lasting for a very short period, so that knowledge of the real progression of the state of work isn’t of particular interest.
In this case the most suitable method is the ON/OFF criterion, which gives a value of zero to the percentage of work done at the very beginning of the activities of the WP and 100% only at the end, when the activities are completed.
This method lacks indications about the progressive completion of the work done, both in physical terms and in financial ones, factors that can be neglected only when the duration of the WP is very short.
Moreover, it can be used when the state of work to date (we call the check date TIME NOW) isn’t tangible, as in the example made above of the issue of an authorisation as the output of a WP, even if the duration of the activities is longer than the lapse between two consecutive TIME NOWs.
In fact, even if the authorisation is achieved through various operating steps (request, presentation of the respective documents and final letter), none of the intermediate steps, when done, gives any indication of whether the authorisation will be achieved or not.
b) Activities whose duration is short but longer than the lapse between two consecutive checks and the risk of financial and time overrun is small
When that’s the case, the 50% criterion could be the most suitable and easy way, consisting of assigning a 50% percentage immediately at the start of the activities and the remaining 50% at their full execution.
In business reality it is applied when the duration of the WP is up to three times as long as the lapse between two consecutive checks.
Why is it applied?
It makes sense to apply the 50% criterion when the benefits of the information about the state of the work done are lower than its costs, because collecting the data to measure the progression of the work is particularly hard and expensive.
This consideration also assumes that the risk of overrunning the financial budget threshold and the time schedule is small.
These reasons lead the project owner and/or project manager to accept the lack of indications about the real progressive accomplishment of the work, both in physical and in financial terms.
In practical terms, in which cases does this criterion differ from the ON/OFF one?
Some WPs are characterized by great initial inertia, so that once the first step is done, the accomplishment of all the remaining activities can be considered a formality.
Examples of this case are numerous and easy to encounter in various business fields, so I leave the "effort" of spotting some of them to anyone who may be concerned.
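A sketch of the 50% criterion, under the same illustrative conventions as above:

```python
def fifty_percent_criterion(started: bool, finished: bool) -> float:
    """50% criterion: 50% is credited as soon as the WP's activities
    start, and the remaining 50% only at their full execution."""
    if finished:
        return 100.0
    return 50.0 if started else 0.0
```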
c) Homogeneous and measurable products
When the WPs consist of homogeneous activities whose outputs are measurable by the same measurement unit, the most suitable method to determine the percentage of progression of the work done is the FINISHED UNITS one.
It consists of taking the units worked and finished (worked hours, physical units) accumulated up to the TIME NOW and dividing them by the estimated total of the same units at the end of the WP.
The resulting percentage is then applied to the total value of the full WP, giving the value of the state of the work.
The precision of this criterion is certainly very high, and it allows a precise view of the progression of the project in both technical and financial terms, highlighting delays and financial variances against the original budget.
The way you calculate the financial variances will be one of the future subjects of www.thestrategiccontroller.com
It is very useful for long-term projects, where the check on the advancement is crucial to put corrective actions in place, while they are still possible, before it's too late.
Of course, a suitable organization is needed to get this criterion applied.
What do I mean by that?
In order to have the correct data in a precise and timely way, some features must be there.
First of all, appropriate forms about the execution of the work must be distributed to the operating personnel to be filled in.
A culture of real collaboration amongst the departments has to be spread, minimizing, as a result, the risk that the operating personnel underestimate the filling-in of those forms designed by the administration department's people.
This phenomenon is rather common in most companies working on multiyear projects.
Another aspect to deal with is the timely collection of these duly filled-in forms.
In fact, when it is hard to set up collection points at each location of the works (a recurring situation in great structure construction projects), the collection and subsequent transmission of the data can be delayed and, as a result, their precision at the TIME NOWs can be compromised.
Last but not least, setting up what is needed to make this criterion perform mustn't be too expensive.
In other terms a cost-benefit analysis is always necessary.
Here follows a simple example showing how this method works.
Input Data:
Work Package AB1 of the Project Alfa
Units to be delivered: 13
Activities of the WP: 5
Exhibit 1 (State of the Work according to the Finished Units Method)

             | Units delivered | Percentage of the total | State of the work (accumulated % after each activity)
Activity 1   | 3               | 23%                     | 23%
Activity 2   | 5               | 38.5%                   | 61.5%
Activity 3   | 2               | 15.5%                   | 77%
Activity 4   | 3               | 23%                     | 100%
Total        | 13              | 100%                    |
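As a cross-check, the accumulated percentages of Exhibit 1 can be recomputed with a short script (the exhibit rounds 23.1% to 23% and 76.9% to 77%):

```python
# Finished Units method applied to the data of Exhibit 1:
# 13 units in total, delivered across four activities.
units_delivered = [3, 5, 2, 3]     # units delivered per activity
total_units = 13

cumulative = 0
state_of_work = []                 # accumulated % after each activity
for delivered in units_delivered:
    cumulative += delivered
    state_of_work.append(round(100 * cumulative / total_units, 1))

print(state_of_work)               # → [23.1, 61.5, 76.9, 100.0]
```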
d) Activities with significant intermediate events (milestones)
Especially in long-term projects, we can find over the period of the works some events representative of a certain degree of achievement of the final goal/output.
Of course, these events must be verifiable.
When these features are present, the most suitable method is the Predetermined Weighted Importance (or Milestones) one.
What does it mean?
Each of the activities of the WP must be examined in depth and be assigned, at its completion, a percentage measuring its weight with regard to the total weight of the WP (equal to 100).
The final execution of each activity, as written above, is represented by the occurrence of a specified event.
Who is the person in charge of first (in the planning phase) spotting and then, in the execution phase, certifying these events?
The most recommended is the person with a deep technical knowledge of the activities in question, that is, the so-called owner of the WP, responsible for their execution.
The suggested steps to assign a specified percentage to the activities a WP is broken down into are the following:
- Spotting all the activities of the WP.
- Choosing which of them are the most important/hard to be accomplished.
- Giving the activities of point 2 a value between 1 and 100.
- Distributing the remaining points out of 100 to the other activities.
- Examining again the score of each activity compared to those given to the other activities (to achieve scores that make sense) and making, if necessary, some corrections.
- The percentage of each activity is then easily determined.
Exhibit 2 (State of the Work according to the Predetermined Weighted Importance/Milestones method)

             | Weight | State of the work (accumulated % after each activity)
Activity 1   | 15     | 15
Activity 2   | 35     | 50
Activity 3   | 30     | 80
Activity 4   | 20     | 100
Total        | 100    |
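The milestone method can be sketched in code using the weights of Exhibit 2 (the dictionary layout is my own illustration):

```python
# Predetermined Weighted Importance (Milestones) method with the
# weights of Exhibit 2; an activity contributes its full weight only
# once its closing event has been certified by the WP owner.
weights = {"Activity 1": 15, "Activity 2": 35,
           "Activity 3": 30, "Activity 4": 20}
assert sum(weights.values()) == 100   # the weights must exhaust the WP

def milestone_state(weights, certified_events):
    """Accumulated % of work done: the sum of the weights of the
    activities whose milestone event has occurred."""
    return sum(w for activity, w in weights.items()
               if activity in certified_events)

print(milestone_state(weights, {"Activity 1", "Activity 2"}))   # → 50
```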
The procedure shown above may seem complex and, for this reason, time-consuming; but consider that it's subjective, tied to the knowledge and competencies of one person and, as a result, debatable.
To reduce this risk, then, it's better to spend some more time on it.
e) Activities not subject to any kind of criterion
Sometimes the "usual" criteria cannot be applied, because either the technical features do not correspond to those required by any of the known methods, or the application of those methods is more expensive than advantageous.
That's why one resorts to the "Estimated percentage criterion".
The assessment by an external Verifier, not the person in charge of the execution of the activities, appears to be the only solution to determine the percentage of the work gradually accomplished.
Like the criterion explained at the previous point, this one is subjective, to an even higher degree.
To reduce the risk of opposition and disputes, with the customer first of all, the technical skills, the experience and the honesty of the verifier need to be known.
There are two methods the verifier can apply to assess the state of the work to the date:
- The direct method.
- The indirect method.
In the first case, the verifier assesses the quantity of the work done up to the Time Now in proportion to the foreseen final output, in order to give it a percentage.
In the second case, he subtracts from the foreseen final output the quantity left to be completed.
Comparing the two, the indirect method appears more pessimistic, because it tends to overvalue the remaining work and, for the same reason, it is more incisive in motivating the people to finish all the activities.
On the other side, there is the risk that the value of the state of the work to date obtained this way is lower than the real one.
Precisely in these comments on the indirect method you can see even more clearly the limitations of the individual assessment typical of the Estimated percentage criterion.
I remind you that its application must be the last resort.
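The two assessment methods can be written out, assuming the verifier expresses both the work done and the work remaining in the same unit as the final output (a hedged sketch, not a prescribed formula):

```python
def direct_estimate(work_done: float, final_output: float) -> float:
    """Direct method: the verifier sizes the work done up to the
    Time Now in proportion to the foreseen final output."""
    return 100.0 * work_done / final_output

def indirect_estimate(work_remaining: float, final_output: float) -> float:
    """Indirect method: the verifier sizes what is left to complete
    and subtracts it from the foreseen final output."""
    return 100.0 * (final_output - work_remaining) / final_output
```

If the verifier overvalues the remaining work, say 70 units out of 100 left when 60 really remain, the indirect method returns 30% instead of 40%: the pessimism described above.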
f) Irregular activities
Often you can find, over the whole path of the WP, the same activities but with a diverse intensity in terms of difficulty and, as a result, of duration and quantity of resources employed (worked hours, raw material quantities).
Take the case of great structure engineering and construction projects.
For instance, the construction projects could have different points where to carry out the same activities.
It’s natural that these points have diverse characteristics (morphological, geological) from one another requiring more or less time, more or less resource consumption with reference to the same work activities.
It’s clear the “finished units” criterion cannot be applied, because every output unit has a different weight and incurred cost.
Then the criterion that fits better than all the others is the "output proportional to the input used" method.
It consists of dividing the resources consumed up to the Time Now by the total of the same resources foreseen to accomplish the WP. This percentage is applied to the estimated final output of the WP, giving as a result the state of the work to date.
On the financial side, the application of this criterion is rather cheap, because the collection of the data proves easy and, in some cases, the data can be gathered from consolidated and well-known processes.
This method allows a precise view of the work over the whole time horizon of the project/WP, but some conditions must hold.
In fact, if there were a change in the productivity of the inputs, for a variety of causes, the assessment of the state of the work could be distorted.
Secondly, for outdoor activities, atmospheric conditions need to be in line with expectations; otherwise they could vary too much the quantity of resources and time spent in comparison to what had been taken into consideration.
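A sketch of the "output proportional to the input used" calculation (the names are illustrative):

```python
def input_proportional_state(inputs_used: float,
                             inputs_total: float,
                             estimated_final_output: float) -> float:
    """The share of the foreseen resources consumed up to the Time Now,
    applied to the estimated final output of the WP, gives the state
    of the work to date."""
    return (inputs_used / inputs_total) * estimated_final_output
```

For instance, 400 worked hours consumed out of 1,000 foreseen, on a WP with an estimated final output of 50 units, gives a state of work of 20 units.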
Another criterion used in the case of irregular activities is the Cost-to-Cost one.
It considers the resources consumed up to the Time Now on the financial side.
You need to divide the costs incurred to date for performing all the activities by the total costs estimated at their accomplishment.
It's the easiest to apply, but I remind you that in the calculation of the project costs you have to consider the costs of the goods ordered but not yet recorded.
Let me be clearer.
The lapse between the issue of an order for the goods needed for the activities to be executed and the recording of the respective costs in the accounting books can be long.
If you take into account only the costs recorded in the accounts up to the Time Now, you won't have an exact "photo" of your expenses and the assessment of the state of the work won't reflect reality.
An essential condition for considering the goods ordered is their exclusive purchase for the WP; that is, the goods ordered to be stored and used in case of necessity must be excluded.
Then the business needs to set up an extra-accounting system in order to consider, in a timely and precise manner, all the information related to the project.
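The Cost-to-Cost criterion, including the goods ordered but not yet recorded, can be sketched as follows:

```python
def cost_to_cost_percentage(costs_recorded: float,
                            ordered_not_yet_recorded: float,
                            total_estimated_cost: float) -> float:
    """Cost-to-Cost criterion: the costs incurred up to the Time Now,
    including goods ordered exclusively for the WP but not yet recorded
    in the accounting books, over the total estimated cost at the
    accomplishment of the activities."""
    incurred = costs_recorded + ordered_not_yet_recorded
    return 100.0 * incurred / total_estimated_cost
```

With 300,000 recorded in the books, 50,000 of goods ordered but not yet recorded and a 1,000,000 estimate at completion, the state of the work is 35%, not the 30% the accounting alone would suggest.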
19. The choice of an appropriate costing system
The choice of a costing system may look like an independent variable for any firm called to make it, so we see many managers choosing and implementing one with no consideration of the kind of information needed, its timeliness, the strategy of the firm itself and the kind of industry.
The costing of products, services and projects, as well as of customers and distribution channels, is crucial for any kind of industry.
It's important for pricing, for make-or-buy analysis, for profitability analysis and for drawing up the financial statements.
Take the case of a company using Target Costing, which takes the price from the market and needs to calculate the cost of its products to see whether the desired profit is or will be achieved.
Without an accurate costing system, every kind of measurement will be wrong and will give the wrong information, with the consequent wrong corrective actions.
As a result of these considerations, the controller needs to take three criteria into consideration:
- The process of the cost accumulation.
- The overhead allocation method.
- The cost measurement.
As you can understand, the management accountant/controller who takes any costing system as given and adapts the "firm" to it, without reasoning on each of the listed criteria, is not doing a good job.
Now we'll examine the existing costing systems against each of the criteria, in a simple and clear way, just to help anyone concerned make that choice.
1. The process of the cost accumulation
There are three methods of costing according to the criterion of the process of the cost accumulation:
a) Job Costing.
b) Process Costing.
c) Operation Costing.
a) Job Costing
When the recorded costs (direct material costs, direct labour costs and overheads) are traced to specific jobs, we are in the presence of Job Costing.
Of course these costs must be easily identifiable and their traceability has to be as objective as possible.
In particular, the overheads are traced to a particular job on the basis of a chosen criterion that reflects as well as possible the consumption of the respective resources (indirect materials, indirect labour, utilities, assets) by the job.
What follows is an example of the calculation and traceability of the overheads to a specific job.
Company A manufactures high-end pens and its main cost driver is the amount of labour hours; that is, the firm is a labour-intensive one.
Exhibit 1 (Monthly Overheads)

Products | Working hours (1) | Total overheads incurred $ (2) | Overhead ratio (3 = 2/1) | Overheads charged to products (3 x 1)
Pen A    | 1400              |                                |                          | 43077
Pen B    | 1200              |                                |                          | 36923
Total    | 2600              | 80000                          | 30.77                    | 80000
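The figures of the exhibit can be reproduced with a few lines of code:

```python
# Overheads allocated on labour hours, as in the exhibit above.
hours = {"Pen A": 1400, "Pen B": 1200}
total_overheads = 80000

overhead_rate = total_overheads / sum(hours.values())   # 80000 / 2600
charged = {pen: round(overhead_rate * h) for pen, h in hours.items()}

print(round(overhead_rate, 2))   # → 30.77
print(charged)                   # → {'Pen A': 43077, 'Pen B': 36923}
```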
The objects of the costing process may also be customers, projects, distribution channels and anything else that absorbs resources in a specific and identifiable way.
The direct traceability to the objects is explained also by the organizational structure of the firm, as well as by its strategy.
With reference to the first point, think of those companies with very few departments and/or where each of them is dedicated to one of the products/services/projects carried out.
It's clear there isn't a strict rule to be followed; this statement springs from the observation of business reality.
Now we’ll look at the role played by the strategy.
In this aspect too, the strategy cannot be disregarded: every choice/decision of business life must be affected by the way the company tries to achieve its objectives.
The company carrying out a differentiation strategy is likely, when its costs are identifiable, to put in place a job costing system, because it isn't particularly interested in pursuing an efficiency objective for its departments.
It charges its costs to the customers when it sets the prices, since the customers are less sensitive to them than they are in other industries.
Nonetheless, the pursuit of efficiency, although to a lower degree compared to low-cost companies, is always advisable.
If you think of businesses working on projects (where personalization is predominant), job costing is the most natural choice.
An exception is when the price of the project is set as a lump sum, that is, fixed from the beginning (except for the work variations requested by the customer); then the way for the company to make the highest profit is to reduce its costs as much as possible, and the pursuit of the efficiency of its departments becomes very important.
b) Process costing
The opposite of Job costing is Process costing, applied by companies producing similar and homogeneous products in high volume.
Examples of the firms using this method are those operating in industries such as mining, paper, some kind of electronics, chemicals.
How does Process costing work?
All kinds of costs are accumulated by each department and then the total is divided by the equivalent units of the production quantity.
The costs of all the departments are then summed to obtain the total manufacturing cost.
Process costing is not typical only of manufacturing companies; it can also be used by service ones providing repetitive and homogeneous output.
The strategy of low cost and low price is the most appropriate because of the high volume of the products, and it's often pursued by using high-capacity machines and technologies and by identifying and eliminating bottlenecks (points in the process where too much work accumulates, slowing down the manufacturing cycle as a result).
Nonetheless, the differentiation strategy can also be pursued under this costing method, depending on the specific competitive environment, through the improvement of the quality of the process.
An important issue when you adopt the process costing is the way you calculate the equivalent units.
It isn't the goal of this article, so I'll limit myself to the mere concept.
At the end of a reporting period you’ll have some finished products and some partially finished ones, that is Work in process.
In process costing the objective is to calculate the cost of the processes performed during a period, the opposite of job costing, where the focus is on the product.
That difference affects the calculation of the Work in process, which is not a clear cost object.
So you have to resort to the equivalent units (department by department), that is, the number of units that could have been finished taking into account the quantity of work performed in the period on both complete and partially finished units.
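A minimal sketch of the equivalent-units computation for one department (the 50% completion figure is an invented example):

```python
def equivalent_units(finished_units: float, wip_units: float,
                     wip_completion: float) -> float:
    """Equivalent units of a department: the finished units plus the
    work in process weighted by its estimated degree of completion."""
    return finished_units + wip_units * wip_completion

# e.g. 1000 finished units plus 400 units that are 50% complete
print(equivalent_units(1000, 400, 0.5))   # → 1200.0
```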
A particular process costing system concerns the joint products, that is, those derived from a common raw material undergoing the same processes and showing no differences until the split-off point, where the semi-manufactured outputs are separated to follow specific processes, each with typical features that differentiate them.
c) Operation costing
A mix of the two preceding methods is operation costing, which consists of the job costing system for the material costs, charged directly to the cost objects, and of the process costing system for the other cost categories, which are assigned to the departments and allocated to the cost objects according to some criterion.
Operation costing is used when the firms manufacture/provide different products/services starting from a common base of activities.
Examples of industries using this method are electronic equipment, furniture and textiles.
The firms addressing a large range of market segments use this method; they are able both to pursue an innovation strategy through high-quality materials and to contain the costs of the operating activities by enhancing the processes in each department.
2. The overhead allocation method
The overheads are costs not directly traceable to the products or other cost objects without using a chosen allocation base.
That means there isn't a clear relationship between the amounts of output and the consumption of the resources classified as "overheads", and that's why you must choose a driver that expresses as well as possible the amount of overheads absorbed by a product/service/project/customer/department.
The methods used are:
a) Volume-based
b) Activity-based
a) The volume-based approach
It uses the units sold/manufactured per product to allocate the overheads; that is, these expenses are assigned to each product in proportion to the respective output.
Other bases used by this approach are labour hours and/or machine hours (depending on the kind of predominant resource used), which allocate the overheads in proportion to the driver amount recorded per cost object.
As to the product/service, this allocation is done either directly to the product/service (job costing) or by passing first through the internal departments (process costing).
All the volume-based approaches assume that every unit of the chosen driver absorbs the same amount of overheads, and that's the core of the allocation on the basis of the respective consumption of the same driver.
b) The activity-based approach
It starts from the assumption that the activities and their amounts aren't always the same for each product and, since the activities are the causes of the consumption of the resources and of their costs, the costs not directly traceable to each product/cost object must be allocated on the basis of a driver (activity by activity) that expresses that different absorption as well as possible.
Nonetheless, in some cases some of the drivers are volume-based, for those activities whose "quantity" is linked to the output produced.
This approach leads to a more accurate costing, which is of great advantage to the decision-making process in the long term, when the fixed costs must also be taken into consideration.
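A hedged sketch of the activity-based allocation; the activity pools, drivers and amounts below are invented for illustration:

```python
pools = {                       # activity: (overhead cost, driver name)
    "machine setups": (20000.0, "setups"),
    "inspections":    (10000.0, "inspections"),
}
driver_usage = {                # driver amounts recorded per product
    "Product X": {"setups": 30, "inspections": 50},
    "Product Y": {"setups": 10, "inspections": 150},
}

def abc_allocate(pools, driver_usage):
    """Allocate each activity pool on its own driver, activity by
    activity, instead of using a single volume base."""
    totals = {drv: sum(use[drv] for use in driver_usage.values())
              for _, drv in pools.values()}
    rates = {act: cost / totals[drv] for act, (cost, drv) in pools.items()}
    return {prod: sum(rates[act] * use[drv]
                      for act, (_, drv) in pools.items())
            for prod, use in driver_usage.items()}

print(abc_allocate(pools, driver_usage))
# → {'Product X': 17500.0, 'Product Y': 12500.0}
```

A single volume base would spread the same 30,000 of overheads in proportion to one driver only, hiding the different absorption activity by activity.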
3. Cost measurement
According to this criterion we have three kinds of costing systems, put in place by the companies on the basis of their information needs.
These needs concern the timeliness of the information and the purposes of product costing, cost control and planning decisions.
These three kinds follow:
a) Actual Costing
b) Normal Costing
c) Standard Costing
a) Actual Costing
In this case all the costs (direct labour, direct materials, overheads) are recorded after they are incurred, according to the actual consumption and the actual "price".
This method is used particularly when the decision-making and business planning needs have to be fulfilled on the basis of the real costs.
b) Normal Costing
The direct labour and direct materials are recorded when they are incurred according to the actual consumption and the actual price, while the overheads are recorded on the basis of an estimated Overhead allocation rate multiplied by the actual driver amount (units manufactured/labour hours/machine hours), achieving as a result the Applied Overheads.
The predetermined Overhead rate is calculated by dividing the estimated Overheads for a period by the estimated amount of the driver for the same period.
This approach is used when the availability of the information for every decision in a timely manner is needed.
Furthermore, this method avoids the fluctuations over time of the full costs of the products/services that follow the changes in the monthly output and, as a result, the changes in the overhead rate used in actual costing.
Of course, the difference between the Applied Overheads and the Actual ones (according to the actual rate) must be balanced at the end of the reporting period.
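The normal-costing mechanics can be sketched as follows (the figures in the example are invented):

```python
def applied_overheads(estimated_overheads: float,
                      estimated_driver_amount: float,
                      actual_driver_amount: float) -> float:
    """Normal costing: the predetermined rate (estimated overheads over
    the estimated driver amount for the period) multiplied by the
    actual driver amount."""
    predetermined_rate = estimated_overheads / estimated_driver_amount
    return predetermined_rate * actual_driver_amount

# e.g. 120000 estimated overheads over 24000 estimated labour hours
# gives a rate of 5 $/hour; 2100 actual hours apply 10500 of overheads
print(applied_overheads(120000.0, 24000.0, 2100.0))   # → 10500.0
```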
c) Standard Costing
All the cost categories are recorded when they are incurred according to their standard "price" and standard quantity, and compared to their actual "price" and actual quantity.
This approach allows the calculation of the variances in their distinct components (price variance, usage/efficiency variance), which are very useful for cost control purposes.
The standard system requires, of course, that its application be really feasible, in particular that setting the standard "price" and "quantity" of the resources is possible (for instance in non-customized productions) and not expensive.
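The two variances named above follow the usual textbook decomposition; a minimal sketch, with invented figures in the example:

```python
def price_variance(actual_qty: float, actual_price: float,
                   standard_price: float) -> float:
    """Price variance: actual quantity x (actual price - standard
    price). Positive means unfavourable (paid more than standard)."""
    return actual_qty * (actual_price - standard_price)

def efficiency_variance(actual_qty: float, standard_qty: float,
                        standard_price: float) -> float:
    """Usage/efficiency variance: standard price x (actual quantity -
    standard quantity allowed). Positive means unfavourable."""
    return standard_price * (actual_qty - standard_qty)

# e.g. 100 kg bought at 11 $ against a 10 $ standard, where the
# standard allowed 90 kg: both variances are 100 $ unfavourable
print(price_variance(100, 11, 10))        # → 100
print(efficiency_variance(100, 90, 10))   # → 100
```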
The costing systems shown above can be combined according to the needs of business information and the strategy.
You could have an Operation/Activity-based/Standard method as well as a Process/Volume-based/Normal costing one.
This decision is up to the top management of the company, which should follow the advice of the controller/management accountant, whose role in putting in place all the tools to make the firm successful is ever more important.