Over the past few weeks I have posted a couple of blogs on my top three ‘must haves’ when looking at procuring activity-based costing software. My top three were:
- the ability to report on not only the beginning and end of your model (the inputs and outputs), but all stages in-between;
- the ability to track multiple cost types through the model; and
- the ability to automate as much of the model allocation and maintenance as possible.
The first two points have been covered in the previous blogs so now it’s time to look at model automation and reducing your maintenance burden.
I mentioned a model in my previous blog – a relatively large model containing around 15,000 general ledger accounts, 3,000 personnel, a few hundred assets, about the same number of activities, around 4,000 products and about 500 services. The model spans numerous regions, with both resources and products spread across these locations. As a result, there are activities that are specific to each region.
Things are going well – the model is built and balanced. But wait, it’s time for the first update. It doesn’t matter if your update is monthly, quarterly, or annually, the same issues will arise.
The first step is to import your new GL resources. Anything that is already in the model (i.e. same cost centre and account code) will be allocated as per the original model (assuming the driver hasn’t changed). The same goes for personnel and assets – if they were allocated in the original model, they will remain allocated. But as you can imagine, in a model with over 15,000 GL accounts and 3,000 personnel, new cost centres are created and people come and go (and move internally!). And there are also changes to the 4,000 products – some are removed and new ones are created.
Let’s look at the update process that needs to occur:
- new cost centres/account codes need to be allocated through to activities (or personnel if they are employee-related expenses);
- new activities need to be allocated through to products;
- new products need to be added to the current activity allocations; and
- new products need to be allocated to the appropriate services (of which there are around 500).
Assume a 5% change in your model between periods. That equates to up to 750 GL accounts, 150 personnel, a few dozen activities, around 200 products and a few services. That adds up to a massive amount of work to update and balance your model.
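The volumes above can be sanity-checked with a quick calculation. A minimal sketch, assuming a flat 5% change rate across every part of the model; the activity count is an assumed figure, since the post only says ‘a few hundred’:

```python
# Back-of-the-envelope check of update volumes at a 5% change rate.
# The activity figure (400) is an assumption ("a few hundred" in the post).
model_sizes = {
    "GL accounts": 15_000,
    "personnel": 3_000,
    "activities": 400,
    "products": 4_000,
    "services": 500,
}
change_rate = 0.05

# Number of objects per category that would need re-allocation each period.
changes = {name: int(size * change_rate) for name, size in model_sizes.items()}
for name, n in changes.items():
    print(f"{name}: ~{n} changed objects to re-allocate")
```

Note that 5% of 4,000 products is 200 – and every one of those changed objects needs a human (or the software) to decide where its cost flows.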
Let’s take one simple example: a University model. The products are the Units that students take, where they take them, when they take them (which semester) and how they take them. For example, ‘ABC 101’ at Campus X, Semester 1, and Online. So the actual product is ‘ABC 101 (Online) – Campus X (Sem 1)’. This is a different product to ‘ABC 101 (Online) – Campus Y (Sem 1)’ and ‘ABC 101 (On Campus) – Campus X (Sem 2)’ as each of these products incurs different overhead and direct costs.
The activities in the model are pretty standard – there are numerous corporate / overhead activities, direct student-related activities, and of course, the direct teaching and research activities.
If the update process were manual, then for each activity the user would need to review all the products and create the links between that activity and the applicable products.
For example, the activity ‘Provide Student Career Counselling Services’ at the Melbourne Campus would need to be allocated to all Units taught at Melbourne using the number of students as the cost driver. And the activity ‘Provide Post Graduate Teaching Services’ at Sydney would need to be allocated to all Units taught at Sydney using the number of post-graduate students as the cost driver.
As there could be thousands of Units taught at each campus, the effort required to maintain these links is enormous – not only in terms of the time needed to complete the task, but also in terms of business knowledge: the person doing the updating would need to know which Units to link to each activity.
Multiply the effort in these two examples across dozens of activities in numerous locations and suddenly the amount of time needed to ‘balance’ the model is massive. It is not uncommon to hear of businesses updating their model quarterly – four weeks to collect the survey data and six weeks to balance the model, leaving a mere two weeks for analysis before the cycle starts again. The ratio should be reversed – two weeks to update and ten weeks to analyse, investigate and improve.
My solution? Insist on software that allows you to auto-create allocation paths instead of manually having to find and link objects to create them.
Using the same example described above, all the user needs to do is create one ‘path’: they allocate the activity to the ‘Teaching’ structure and then select the most appropriate driver – in this case the number of students on campus (as opposed to those studying online) by Campus (here, Melbourne), or ‘EFTSL On Campus by Campus’. The ABC software then does all the allocation work for you:
- by allocating the activity to the structure, all Units held under that structure are automatically selected as potential allocation paths, whether there are 10 Units or 10,000 below the Teaching structure; and
- the software then applies filters based on the selected driver – only those Units that have ‘On Campus EFTSL’ and that are located in ‘Melbourne’ are retained as allocation paths.
As a result, out of the 4,000 Units found under ‘Teaching’, only those with On Campus EFTSL taught from Melbourne are actually selected by the software as allocation paths. And the best bit? It takes the user 20 seconds to create the allocation path and select the driver, and the software does all the rest.
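The auto-created allocation path can be sketched in code. This is a minimal illustration of the technique, not the software’s actual data model – the `Unit` fields, sample Units and dollar figures below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    campus: str
    on_campus_eftsl: float  # equivalent full-time student load, on campus
    online_eftsl: float     # equivalent full-time student load, online

# Hypothetical sample of Units sitting under the 'Teaching' structure.
teaching_units = [
    Unit("ABC 101 (On Campus) - Campus X (Sem 1)", "Melbourne", 120.0, 0.0),
    Unit("ABC 101 (Online) - Campus X (Sem 1)",    "Melbourne", 0.0, 45.0),
    Unit("DEF 202 (On Campus) - Campus Y (Sem 1)", "Sydney",    80.0, 0.0),
    Unit("GHI 303 (On Campus) - Campus X (Sem 2)", "Melbourne", 60.0, 0.0),
]

def allocate(activity_cost: float, units: list[Unit], campus: str) -> dict[str, float]:
    """Auto-create allocation paths: keep only Units with on-campus EFTSL
    at the given campus, then spread the cost in proportion to EFTSL."""
    targets = [u for u in units if u.campus == campus and u.on_campus_eftsl > 0]
    total_eftsl = sum(u.on_campus_eftsl for u in targets)
    return {u.name: activity_cost * u.on_campus_eftsl / total_eftsl
            for u in targets}

# 'Provide Student Career Counselling Services' at Melbourne, $90,000.
allocation = allocate(90_000.0, teaching_units, "Melbourne")
```

Only the two Melbourne on-campus Units receive a share (in the ratio 120:60 of their EFTSL); the online and Sydney Units are filtered out automatically. The user’s whole job was the equivalent of calling `allocate` once – the selection and weighting fall out of the data.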
But wait…some of you may be asking how the software selected the appropriate Units. The student enrolment data obtained from the university provided a wealth of information beyond just the Unit names. It also provided the number of students taking each Unit, the type of student (Commonwealth funded, domestic fee paying, international etc), the Unit level (undergraduate, postgraduate etc), the location of the Unit (Melbourne, Sydney, Brisbane etc), the fees associated with the Unit (for both funded and fee-paying students), plus a range of other information.
All this metadata is loaded against each Unit when the Unit names are imported into the model, and this data, either in isolation (e.g. Melbourne Units) or combined (Postgraduate EFTSL undertaking Melbourne Units Online), can be automatically targeted and used by drivers with virtually no effort from the user to maintain and manage their model.
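The isolation-versus-combined targeting amounts to filtering on one or several metadata attributes at once. A sketch under assumed names – the attribute keys and sample Units here are illustrative, not the actual import format:

```python
# Each Unit carries a dict of metadata attributes loaded at import time.
# A driver target is simply a filter over those attributes.
units = [
    {"name": "ABC 101", "campus": "Melbourne", "mode": "Online",
     "level": "Postgraduate", "eftsl": 30.0},
    {"name": "ABC 101", "campus": "Melbourne", "mode": "On Campus",
     "level": "Undergraduate", "eftsl": 120.0},
    {"name": "XYZ 900", "campus": "Sydney", "mode": "Online",
     "level": "Postgraduate", "eftsl": 25.0},
]

def driver_targets(units: list[dict], **criteria) -> list[dict]:
    """Select Units whose metadata matches every supplied criterion,
    so attributes can be used in isolation or combined."""
    return [u for u in units
            if all(u.get(key) == value for key, value in criteria.items())]

# In isolation: all Melbourne Units.
melbourne = driver_targets(units, campus="Melbourne")

# Combined: Postgraduate EFTSL undertaking Melbourne Units Online.
pg_online_melb = driver_targets(units, campus="Melbourne",
                                mode="Online", level="Postgraduate")
```

Because each new Unit arrives with its metadata already attached, drivers defined this way pick up the right Units in the next update without anyone re-linking anything.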
And the other bonus? All this metadata is automatically carried into the OLAP cubes and can be used to filter or customise your final reports.
So, to summarise my last three blogs…
- don’t be constrained by three modules;
- don’t be restricted to only one cost type; and
- ensure you select software that automates the majority of your model maintenance.
Trust me, these three things will definitely make your model simpler to maintain and easier to report on – things every cost modeller should be seeking!