Case Study – Retail Consultancy

The Situation

The client was a retail consultancy advising shopping centres, local authorities and retailers on the best decisions to make about their sites and potential acquisitions.

The reports that the company created on behalf of their clients made extensive use of demographic data, and there were thirteen separate Access databases holding the key datasets. These databases were re-created every time a new version of a dataset became available, and the data analysts were also in the habit of making copies of them for their own purposes.

It is easy to see how this arrangement rapidly became confusing and error-prone, and the client had assessed that up to 60% of the analysts' time was spent on data assembly rather than on report compilation and authoring, which was what they were actually being paid to do.

Our Approach

The first task was to secure the data, so we loaded it all into SQL Server, retaining Access as the 'front-end'. This gave far more robust data storage and ensured that there was only one copy of the data.

Although users could still make local copies for their own use if they wanted, there would always be one single point of reference.
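
As a rough sketch of the 'single copy of the data' idea (the system itself used linked Access tables rather than Python, and the server, database and table names below are invented for illustration), reading from the shared back-end looks something like this:

```python
# Illustrative only: a hypothetical query against the single shared back-end,
# rather than against a local Access copy. Names are not the client's actual schema.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=demographics-server;"      # hypothetical server
    "DATABASE=DemographicsStore;"      # hypothetical single back-end database
    "Trusted_Connection=yes;"
)

def fetch_population(postcode_area: str):
    """Read demographic rows from the one authoritative copy of the data."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT Postcode, Population FROM dbo.Demographics WHERE Postcode LIKE ?",
            postcode_area + "%",
        )
        return cursor.fetchall()
```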

SQL Server also provides far better performance for the 'number-crunching' elements, and we took the opportunity to simplify the versioning of the different datasets, so that analysts could select exactly which version of a dataset was used for a report.
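
To give a flavour of that versioning, a hypothetical DatasetVersions table recording each release lets a report name exactly the version it needs (again, a sketch rather than the client's actual schema):

```python
# Sketch of version selection: each dataset release is a row in a hypothetical
# DatasetVersions table, and the data rows carry that release's VersionId.
def load_dataset_version(conn, dataset_name: str, version_label: str):
    """Return the rows for one named release, e.g. ('Census', '2021')."""
    cursor = conn.cursor()
    cursor.execute(
        """
        SELECT d.*
        FROM dbo.DemographicData AS d
        JOIN dbo.DatasetVersions AS v ON v.VersionId = d.VersionId
        WHERE v.DatasetName = ? AND v.VersionLabel = ?
        """,
        dataset_name,
        version_label,
    )
    return cursor.fetchall()
```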

The project aim was to simplify data assembly as much as possible, and to generate the data required for each report directly into an Excel spreadsheet, where the graphs and other outputs could be created.
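
That final step, pushing the chosen data straight into the spreadsheet that the graphs are built from, might be sketched as follows, assuming the same hypothetical versioned tables as above and pandas for the Excel output:

```python
# Sketch of the 'straight into Excel' step; purely illustrative, using the same
# hypothetical table names as the version-selection sketch above.
import pandas as pd

def export_report_data(conn, dataset_name: str, version_label: str, path: str):
    """Write one dataset version to the workbook a report's graphs are built from."""
    sql = """
        SELECT d.*
        FROM dbo.DemographicData AS d
        JOIN dbo.DatasetVersions AS v ON v.VersionId = d.VersionId
        WHERE v.DatasetName = ? AND v.VersionLabel = ?
    """
    df = pd.read_sql(sql, conn, params=[dataset_name, version_label])
    df.to_excel(path, sheet_name="ReportData", index=False)  # graphs are built on this sheet

# e.g. export_report_data(conn, "Census", "2021", "retail_report_2021.xlsx")
```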

The Solution

This was a complex application, so a detailed outline of what it does is beyond the scope of this case study, but a couple of screenshots will show the sort of system that we built.

The Outcome

A significant time-saving was realised – the process of generating the spreadsheet that drives the graphing for each report now takes ten to fifteen minutes.

It is far more difficult to make an error, such as selecting the wrong dataset or year.

The core datasets are protected, and cannot be corrupted or damaged by the users.

Users are able to take extracts of any dataset for their own use, so their freedom and flexibility are unaffected – this was an important part of getting their buy-in to the project!