What can we do that you can’t?

Last week we were invited to present our regular solution to a university technical team. There were plenty of analytical and IT skills in the group, and one of the attendees asked a very important question: “What can you do that we can’t?”. What a great question, and one that we should be answering all the time (yes, I’m not a natural sales or marketing person), so to rectify this I thought I would write a quick blog post on the topic. So thank you for asking the question; now for the answer.

Could an analytic/technical team inside a college or university build the types of models we build? Yes, absolutely, and it’s already been done. So why use us? The answer is simply that we can save you a significant amount of time, effort and stress by providing two key things:

  • Experience: Over twenty years of large-scale modelling experience and over ten years of Higher Education-specific modelling experience across multiple institutions internationally, and
  • Engine: A powerful modelling engine.

Higher Education Modelling Experience

Building the model with an internal team will take a lot more time, particularly if you have never built these specific types of models before. There are a number of key steps you have to go through:

  • Model scope and boundary (what’s included and what’s excluded)
  • Data review (what systems do you need, and how good is the data in each?)
  • Model design (what is the structure of the model, and what allocations need to be developed?)
  • Model build
  • Report design and build

The scope and boundary could take from a few weeks to a few months.

The data review process could be about the same, but shouldn’t take too long for an analyst since this is their key skill set.

The model design process could take anywhere from six months to eighteen months or longer. This requires meeting with key stakeholders, getting input from a wide variety of sources on appropriate business rules, deciding how to treat certain costs, researching additional data requirements that might not exist in source systems, etc. If you have never built these types of models before, then there is also research and learning time to figure out what the model actually is.

The model build process could take another six to twelve months.

The report design and build process should be pretty quick; again, this is the analyst’s domain.

So the shortest timeframe overall could be around 14 months and the longest could stretch over multiple years.

So what can we do that you can’t?

Since we have built a large number of models over 20+ years for large and small institutions, we know what works and what doesn’t. We already have a large number of Higher Education-specific business rules we can use, and we can build the entire model with minimal input from stakeholders initially; this is usually done in the scoping study phase. We do review the model in detail with key stakeholders, but after the first phase of the build process. It is significantly easier for stakeholders to suggest changes when they can see the results of the model, rather than trying to specify what they want upfront.

So our timeframe for an initial model would be:

Scoping Study – 2 weeks

Model and Report design/build process – 5-7 weeks

This represents our Executive Model – it is a simpler starting model. Our regular full model could take around 18 weeks, but we’ve discovered that it’s much better to start with the simpler model and then improve it over time with key stakeholders.

That said, the simplified model will still have:

  • Direct cost and revenue,
  • All overhead,
  • For all schools and departments,
  • For all Programs,
  • For all Course Instances (when, where and how taught), and
  • Including Research, Community Support, Commercial and any Sporting Programs.

After the build, the other major piece of work is updating the model. Universities and colleges tend to change on a regular basis (particularly at the moment, in the middle of a pandemic), so the level of effort to update the model internally could be just as much as the effort to build it in the first place.

We have significant experience updating our models as well, and can confidently update the model in only a 3-4 week period.

So the question for analysts to ask themselves is: “Is it better to spend our time building and maintaining the models, or is there more value in analyzing and interpreting the results of the model?”

A powerful modelling engine

The tools available to analysts are plentiful and powerful. However, not many are dedicated to the specific types of models we build. Spreadsheets are the usual go-to solution for analysts; unfortunately, the size of the datasets we are talking about exceeds the capacity of most spreadsheets. So the next step is a database or analytical tool. Both allow programming to assist with building these models, but every allocation or business rule will need to be designed from scratch, and if there are changes down the track they will have to be rebuilt.

One of the big issues with these types of models is the distribution of overhead and, in particular, having to resolve circular references. A classic example is where the IT department supports the HR department with IT services and the HR department supports the IT department with payroll and other HR support. Can this be resolved? Yes, it can, but again this all adds complexity and time to the build of the model.
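
To make that concrete, here is a minimal sketch in plain Python of the classic “reciprocal method” for resolving a two-way circular reference (this is not our engine, and the departments, costs and percentages are purely illustrative): write each service department’s fully loaded cost as a small system of equations and solve it.

```python
# Minimal sketch of resolving a two-way (IT <-> HR) overhead circular
# reference with the classic "reciprocal method": solve a small linear
# system for each service department's fully loaded cost, then allocate.
# Departments, costs and percentages are purely illustrative.
import numpy as np

direct_cost = {"IT": 1_000_000.0, "HR": 600_000.0}  # hypothetical direct costs

it_used_by_hr = 0.20   # 20% of IT's effort supports HR
hr_used_by_it = 0.10   # 10% of HR's effort supports IT

# Let x = IT's fully loaded cost and y = HR's fully loaded cost:
#   x = direct_IT + hr_used_by_it * y
#   y = direct_HR + it_used_by_hr * x
A = np.array([[1.0, -hr_used_by_it],
              [-it_used_by_hr, 1.0]])
b = np.array([direct_cost["IT"], direct_cost["HR"]])
it_total, hr_total = np.linalg.solve(A, b)

print(f"IT fully loaded: {it_total:,.0f}")   # a little over 1,000,000
print(f"HR fully loaded: {hr_total:,.0f}")   # a little over 600,000

# The remaining 80% of IT and 90% of HR would then be allocated out to
# schools, programs and courses using whatever drivers apply.
```

Easy enough by hand with two departments; with dozens of mutually supporting cost centres and hundreds of drivers, this is exactly the kind of plumbing that eats an internal team’s time.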

So what can we do that you can’t?

With our modelling engine (Pilbara Insights), the business rules can all be defined centrally, so they are easily managed and, more importantly, easily changed as things change in the organization. There can easily be well over 100 different business rules in these models.

These rules can be developed so that you don’t have to identify the possibly millions of allocations (one of our biggest models has about 150 million allocation paths); you can set up a particular rule, make one allocation at the top of the model hierarchy, and the engine will go and find all of the paths to build automatically. A classic example is using “Staff FTE” as a way of distributing cost: rather than you finding each object that has a “Staff FTE” number associated with it, the engine will go and look first and make the allocation for you. A major benefit of this approach is that, should you review this with key stakeholders and they prefer “Square Feet” to Staff FTE as a driver, all you need to do is change the driver ONCE. The model will remove all existing allocations using Staff FTE, seek out all objects with “Square Feet” metrics, and automatically build all of those allocation paths.
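
To illustrate the idea, here is a rough, hypothetical sketch of driver-based allocation in Python (it is not the Pilbara Insights API, and the departments, metrics and figures are invented): find every object carrying the chosen metric and split the cost in proportion, so swapping the driver is a one-word change.

```python
# Hypothetical sketch of driver-based allocation (not the actual Pilbara
# Insights API): find every target object that carries the chosen metric
# and split the source cost in proportion to that metric.

def allocate_by_driver(source_cost, targets, driver):
    """Return {target_name: allocated_cost} for every target that has `driver`."""
    weights = {name: metrics[driver]
               for name, metrics in targets.items()
               if driver in metrics}
    total = sum(weights.values())
    return {name: source_cost * w / total for name, w in weights.items()}

# Invented cost objects, each carrying the metrics attached to it.
departments = {
    "School of Business":    {"Staff FTE": 120, "Square Feet": 30_000},
    "School of Engineering": {"Staff FTE": 180, "Square Feet": 55_000},
    "School of Nursing":     {"Staff FTE": 100, "Square Feet": 15_000},
}

# One rule at the top of the hierarchy; the "engine" finds the paths.
print(allocate_by_driver(2_000_000, departments, driver="Staff FTE"))

# Stakeholders would rather use floor space? Change the driver ONCE and the
# Staff FTE allocations are replaced with Square Feet based ones.
print(allocate_by_driver(2_000_000, departments, driver="Square Feet"))
```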

The modelling engine will automatically resolve all circular references (you don’t actually have to do anything here). Unless, of course, it is a 100% circular reference – all of IT supports HR and all of HR supports IT – which is a logic issue rather than a technical issue, so it needs manual intervention to resolve.
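
For the technically curious, here is an illustrative sketch (again with made-up numbers, not our engine) of why the 100% case needs a human: the underlying system of equations becomes singular, so there is no unique answer for an engine to find.

```python
# Why a 100% circular reference can't be resolved automatically: if both
# departments pass 100% of their cost to each other, the reciprocal-method
# system of equations is singular (it has no unique solution).
import numpy as np

A = np.array([[1.0, -1.0],    # x = direct_IT + 1.0 * y
              [-1.0, 1.0]])   # y = direct_HR + 1.0 * x
print(np.linalg.det(A))       # 0.0, i.e. singular

try:
    np.linalg.solve(A, np.array([1_000_000.0, 600_000.0]))
except np.linalg.LinAlgError as err:
    print("Needs manual intervention:", err)  # "Singular matrix"
```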

It’s also very easy to add in extra “value items”. Cost is the primary value item used, but revenue is something that is always included in our models as well. One university has actually started adding in Environmental Metrics, so they can compare Economic Costs and Environmental Costs in the one model.
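
As a rough sketch of what an extra value item means in practice (the item names and numbers here are invented), each allocation simply carries several values at once, so cost, revenue and an environmental metric all flow down the same paths and can be compared side by side.

```python
# Illustrative only: several "value items" riding on the same allocation
# path, so economic and environmental values flow through the same model
# structure in one pass.
course_share = 0.25  # this course's share of its department, per the driver

department_values = {
    "Cost ($)":    4_000_000,
    "Revenue ($)": 5_200_000,
    "CO2 (kg)":      180_000,  # hypothetical environmental metric
}

course_values = {item: amount * course_share
                 for item, amount in department_values.items()}
print(course_values)
# {'Cost ($)': 1000000.0, 'Revenue ($)': 1300000.0, 'CO2 (kg)': 45000.0}
```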

There are actually a large number of model-specific capabilities, including tagging, rollover drivers (if one driver fails, use another one), online error reporting, layering (driving 100% of costs in two different directions), etc. I won’t go into all of these capabilities in this post, but the point is that we have been building large-scale cost models for over 20 years and we have developed all of these advanced capabilities to make our lives a lot easier!

Summary

So the primary benefits of using us are that we can save you a lot of time, effort and stress, because we have lots of experience building a large number of diverse Higher Ed models and we use a powerful modelling engine. The end result is a very detailed model developed quickly, freeing you up to analyze and interpret the results rather than spending a huge amount of time designing, building and maintaining the models.

The other major benefit here in Australia (and hopefully soon in the US and Canada) is detailed benchmarking. Because we are maintaining these models, we can consolidate and anonymize the dataset and develop detailed benchmarks so you can see how your costs compare to the group’s max, min and mean costs. This is an opt-in option for all of our client institutions, and so far everyone has opted in. This is particularly powerful because it helps to answer a question we get asked quite a bit: “Now that we know how much it actually costs to deliver this course/program…how much SHOULD it cost?” Benchmarking won’t provide the direct answer, but it’s another data point to use for analysis!