The team at Ignite recently completed a B2B self-serve customer portal for their client, ZEN Energy. Ignite’s Head of Operations, Kris Hunter, and Senior Data Scientist/Engineer Mitch de Zylva discuss their perspective on building self-serve options.
Why bother with self-serve?
At its highest level, self-serve empowers customers: it gives them the freedom to interrogate their own data and frees up the time of the provider’s customer care team.
“There was a lot of business time taken up with answering ad-hoc customer requests for data,” says Mitch. “By giving the customers access to their own information, the ZEN Energy team was able to free up time to focus on creating deeper value for their customers, improving processes and outcomes, rather than serving information that the customer wanted to access themselves.”
Self-serve increases risk
Self-serve increases risk for any business: more people access more information, with less control over specific processes and no manual intervention to restrict what data is sent out.
“The key is to predefine and interrogate the data that our customer is happy to share. We take a lot of care in establishing the right protocols to protect that data. For ZEN Energy, we put in place a sophisticated serving layer. This separates what the end user can ask for and the actual data,” says Mitch.
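A serving layer of the kind Mitch describes can be sketched as a thin validation layer that only accepts predefined, parameterised requests, so the end user can never query the underlying data directly. This is a minimal illustration, not the actual ZEN Energy implementation; all request names and parameters here are hypothetical.

```python
# Minimal sketch of a serving layer that separates what the end user
# can ask for from the actual data. All names are illustrative.

# The only requests the portal will serve. Each maps to a predefined,
# parameterised query shape; anything else is rejected outright.
ALLOWED_REQUESTS = {
    "usage_by_month": {"site_id", "month"},
    "invoice_lines": {"invoice_id"},
}

def validate_request(request_type: str, params: dict) -> dict:
    """Reject any request that is not predefined, or that carries
    unexpected parameters, before it ever reaches the data store."""
    expected = ALLOWED_REQUESTS.get(request_type)
    if expected is None:
        raise ValueError(f"unknown request type: {request_type}")
    if set(params) != expected:
        raise ValueError(f"expected parameters {sorted(expected)}")
    return {"request": request_type, "params": params}

# A valid, predefined request passes through to the data layer:
print(validate_request("usage_by_month", {"site_id": "S1", "month": "2023-01"}))
```

Because the allow-list is defined up front, adding a new self-serve capability is an explicit design decision rather than an open-ended query surface.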
“It is all about good architecture,” adds Kris. “The goal is to build something that can be accessed by one or 1,000 customers all at once, without compromising speed, accuracy, or security. Everything has to be scalable, particularly with a fast-growing customer like ZEN Energy.”
The team spent a significant amount of time defining the data set, ensuring that they put in place the right restrictions around access to protect data privacy and integrity.
What did you wish you knew before you started?
The data asset that your organization stores and tracks should be a model that is aligned to your business processes, under your control, and understandable by all members of your organization.
“The key thing to understand is the process that the data goes through: the steps in the process, what is manual and what is automated,” adds Mitch. “The front-end part of processing data may have constraints, including manual steps. These are key to understand: where, when, and how does human intervention occur in the process?”
“The idea is to remove all manual steps and have human reviewers rather than agents. The easiest point of failure is always the manual process part.”
Initially, the team had not fully understood which parts of the front-end process were manual, which had the potential to cause breakdowns in some of the data flow. Understanding the full process is key to creating a robust architecture.
“What really had us was the billing system migration that ZEN Energy completed right in the middle of the project. We had designed the portal for one platform, then we had to re-pipe it for another. This was also a great way to test the robustness of our architecture. There is no better test than having to deconstruct and reconstruct all your incoming sources of data,” adds Kris.
Ignite had to make a call on access vs analysis. Data platforms are typically strong in one or the other, but not both. When looking to store a lot of data, the question is: ‘Do you have analytics requirements or access requirements?’
If you have analytics needs, then the data should be modelled in a way that lets users understand the relationships and supports different investigations and queries of that data. SQL is a common and familiar approach for this. If the focus is instead on serving a pre-defined structure that must support highly scalable, concurrent access, then we find NoSQL is a better fit.
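The distinction can be illustrated with a toy example (all names hypothetical): an access-oriented store keeps documents pre-shaped and keyed, so a request is a single lookup, whereas an analytics-oriented model keeps normalised rows and computes answers at query time.

```python
# Toy illustration of access-oriented vs analytics-oriented storage.

# Access-oriented (NoSQL-style): pre-computed, denormalised documents
# keyed by (customer, month). Serving is one keyed read.
access_store = {
    ("CUST-1", "2023-01"): {"usage_kwh": 1200, "total_charge": 310.50},
    ("CUST-1", "2023-02"): {"usage_kwh": 1100, "total_charge": 285.00},
}

def serve(customer_id: str, month: str) -> dict:
    """Fast and highly concurrent, but only answers the predefined
    question the document was shaped for."""
    return access_store[(customer_id, month)]

# Analytics-oriented (SQL-style): normalised rows, with relationships
# preserved so analysts can ask new questions at query time.
readings = [
    {"customer": "CUST-1", "month": "2023-01", "kwh": 700},
    {"customer": "CUST-1", "month": "2023-01", "kwh": 500},
]

def aggregate(customer_id: str, month: str) -> int:
    """Computes the answer by scanning and summing at query time."""
    return sum(r["kwh"] for r in readings
               if r["customer"] == customer_id and r["month"] == month)
```

The access store trades flexibility for speed: both paths can produce the same number, but only the normalised model supports questions nobody anticipated when the documents were shaped.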
The thinking around the data modelling for ZEN Energy was to provide access to a specific, pre-defined set of information, with the focus all on fast access and the ability to support future customer scale.
Access has to be simple and intuitive. The customer can get their information now, very quickly: the serving platform can pull back 1.8M rows of data within 200 milliseconds. That is phenomenal, and it is not the main focus of a more internal-facing SQL-based solution aimed at supporting analysts and reporting.
Who benefits from the solution?
- The end users (ZEN Energy’s customers) can perform their own analytics, reconcile their bills on their end, and do whatever they want with their data.
- ZEN Energy benefits from its customer team engaging with customers in a service relationship, rather than servicing data requests.
- ZEN Energy now has a scalable solution for when they add more customers or want to provide access to more data. The only limitation is compute cost (in a pay-for-use cloud model) and the front-end servicing mechanism.
- The system is designed with expansion in mind. Storing all that data within the NoSQL environment is cheaper than storing it in an analytics environment.
- The data set is stored behind firewalls and multiple layers of secrets which need to be passed between different layers. This prevents people from accessing data without the right set of credentials.
- These credentials lock what the individual can see. They can’t see everything without having a series of specific identifiers (NMIs) assigned to them. It is not simply a username or password that gives you access to data.
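The credential scoping described above can be sketched roughly as follows. The structure is hypothetical, but it shows the core idea: a credential carries a set of assigned NMIs, and every read is checked against that set, so a valid login alone is not enough to see data.

```python
# Rough sketch of credential-scoped access: a credential is only useful
# together with the NMIs (site identifiers) assigned to it.
# All keys and data here are illustrative.

# Which NMIs each authenticated principal is allowed to see:
credential_nmis = {
    "cust-1-api-key": {"NMI0001", "NMI0002"},
}

meter_data = {
    "NMI0001": [{"interval": "00:00", "kwh": 1.2}],
    "NMI0002": [{"interval": "00:00", "kwh": 0.8}],
    "NMI0003": [{"interval": "00:00", "kwh": 2.1}],  # another customer's site
}

def read_meter_data(credential: str, nmi: str) -> list:
    """Every read checks the requested NMI against the credential's
    assigned set before any data is returned."""
    allowed = credential_nmis.get(credential, set())
    if nmi not in allowed:
        raise PermissionError(f"credential not authorised for {nmi}")
    return meter_data[nmi]
```

An unknown credential, or a known credential asking for an unassigned NMI, fails before the data layer is touched.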
Tech Solutions
The Ignite team used Azure Cosmos DB, a NoSQL database service. This architecture will also allow us to support potential Consumer Data Right (CDR) requirements moving forward.
You can’t cut corners when it comes to data security and ZEN Energy certainly did not for this platform.