
5-step approach to save 10+ hours per week in your data teams (Extra: example data stack)

January 19, 2023
7 min. read

Introduction

As the volume of data in companies continues to grow, data teams are under more pressure than ever to extract, transform, and analyze that data to drive business decisions. However, 30+% of data teams’ capacity is spent on non-value-adding activities, such as creating the 158th dashboard that will be forgotten within a week or modifying simple SQL queries, which results in missed opportunities to extract value from the available data. This is nobody’s fault: a better approach and tool stack are needed to address it. In this article, we outline a 5-step approach to save data teams 10+ hours per week and share the data stack we put in place at Flawless that helps us achieve this.

The problem

Data teams often spend as much as 50% of their time on non-value-adding activities, such as creating dashboards that no one will use, modifying simple SQL queries based on unclear business requests, or chasing business teams to understand their needs for new reports. This is a costly waste of resources given how much impact data teams can have when focused on the right things (i.e. extracting value from the available data). Data teams should spend the majority of their resources on making data accessible for downstream use (extracting, transforming, and cleaning data) and on building and enhancing data products.

We discussed some of the root causes of these issues in a previous article - a brief snapshot below:

  • Unclear business requirements / lack of business context when requests go from business to data teams
  • Ops teams are not able to navigate the data and / or write SQL to extract what they need (which in turn makes data teams a bottleneck)
  • No visibility into which dashboards / reports / alerts are actually used, so data teams unnecessarily maintain ones that no one opens

As a result, data teams never catch up with their task queue or satisfy business needs, and at the same time never have capacity for important projects.

5 steps to save 10+ hours per week

1. Understand key KPIs and operational events

The first step is to sit down with business stakeholders and understand their key performance indicators (KPIs) and which operational events drive them. This gives data teams a clear understanding of what the business needs from its data and where to focus their efforts. We’ve found that 2 simple questions uncover a lot of insight:

(a) What are the main KPIs or OKRs you’re currently optimising for?

(b) What are the main incidents / events that happen in your day-to-day operations that affect those KPIs?

2. Create ready-to-use data views

Once the key KPIs and operational events have been identified, data teams can create the first 3-4 data views related to those KPIs in a clean, ready-to-use format. Note: these should be multi-use outputs, i.e. clean data that business teams can use across multiple tools and workflows (vs. single-use outputs like dashboards). This makes it easy for business teams to self-serve and reduces their dependence on data teams.
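To make this concrete, here is a minimal sketch of what a multi-use data view can look like. All table, column, and view names (`raw_orders`, `orders_clean`) are hypothetical, and SQLite stands in for a real warehouse:

```python
import sqlite3

# Hypothetical example: expose raw order events as a clean,
# multi-use "orders_clean" view that any downstream tool can query.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE raw_orders (
        id INTEGER, amount_cents INTEGER, status TEXT, created_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?, ?)",
    [
        (1, 2500, "completed", "2023-01-10"),
        (2, 900, "cancelled", "2023-01-11"),
        (3, 4200, "completed", "2023-01-11"),
    ],
)

# The view standardizes units (cents -> currency) and filters out
# records business users should not see, so every dashboard, alert,
# or ad-hoc query starts from the same definition.
conn.execute("""
    CREATE VIEW orders_clean AS
    SELECT id,
           amount_cents / 100.0 AS amount,
           created_at AS order_date
    FROM raw_orders
    WHERE status = 'completed'
""")

rows = conn.execute("SELECT id, amount FROM orders_clean ORDER BY id").fetchall()
print(rows)  # -> [(1, 25.0), (3, 42.0)]
```

Because the cleaning logic lives in the view rather than in any single dashboard, every downstream tool inherits the same definitions for free.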

3. Empower business teams

Next, data teams can empower business teams to create their own dashboards, reports, or alerts by connecting these data views to self-service tools. This will free up data teams from repetitive support tasks and give business teams more control over their data.

4. Set up a request process for ongoing needs

Data teams should set up a process in which business teams can request new data views with predefined inputs. This will ensure that all requests are clear and actionable, and will make it easier for data teams to prioritize and complete them. As this is still a queue (which can get unwieldy again), the key is to structure the requests as clearly as possible and aim for multi-use business data as outputs (vs. a dashboard / custom alert).
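One way to enforce predefined inputs is a structured request form. The sketch below uses a Python dataclass with illustrative fields (our actual request process is not code-based; the field names and the multi-use rule here are assumptions for the example):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured data-view request: forcing
# requesters to fill in predefined fields keeps requests clear and
# actionable, and makes prioritization mechanical.
@dataclass
class DataViewRequest:
    kpi: str                # which KPI the view supports
    business_question: str  # what decision it informs
    grain: str              # e.g. "per customer per day"
    fields: list = field(default_factory=list)
    intended_uses: list = field(default_factory=list)  # multi-use, not one dashboard

    def is_actionable(self) -> bool:
        # Queue a request only when every predefined input is filled in
        # and it has more than one intended use (multi-use output).
        return all([self.kpi, self.business_question, self.grain,
                    self.fields]) and len(self.intended_uses) >= 2

req = DataViewRequest(
    kpi="On-time delivery rate",
    business_question="Which routes drive late deliveries?",
    grain="per route per day",
    fields=["route_id", "date", "deliveries", "late_deliveries"],
    intended_uses=["dashboard", "daily alert"],
)
print(req.is_actionable())  # -> True
```

In practice the same predefined inputs could just as well live in a ticket template or form tool; the point is that incomplete or single-use requests are rejected before they reach the queue.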

5. Focus on data transformation

Finally, data teams should focus on importing, transforming, and cleaning data instead of repetitive support requests. By automating as much of this process as possible, data teams can save time and resources, and create new data views as needed to support business decisions.

To complement the above, a scalable and flexible data stack is needed. 

The process and data stack we use at Flawless to enable the above

At Flawless, we didn’t have a data team or ready-to-use data when we set this up, so we had to develop a system to supercharge the approach above. Below is a quick snapshot of how it works:

Simplified overview of the data stack at Flawless

1. Collect & extract data

Tools like Fivetran or Airbyte can be used to extract data from various sources, such as databases, SaaS applications, and APIs. These tools automate the process of data extraction, which can save data teams a significant amount of time and resources. In our case, we had to set up a reliable pipeline to get product usage data (from our production environment) and sales & marketing data into a BI database. We went with Fivetran as it had all the connectors we needed and was relatively cheap.
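For intuition, this is roughly the loop such connectors automate: page through a source API and land the records, unmodified, in a warehouse table. The `fetch_page` function, its cursor scheme, and the `raw_accounts` schema are hypothetical stand-ins for a real connector, and SQLite stands in for the warehouse:

```python
import sqlite3

def fetch_page(cursor=None):
    # Stand-in for an HTTP call to a paginated SaaS API;
    # returns (rows, next_cursor), with next_cursor=None on the last page.
    pages = {None: ([{"id": 1, "plan": "pro"}, {"id": 2, "plan": "free"}], "p2"),
             "p2": ([{"id": 3, "plan": "pro"}], None)}
    return pages[cursor]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_accounts (id INTEGER, plan TEXT)")

# Extract-and-load loop: follow the pagination cursor until exhausted,
# inserting raw records as-is (transformation happens later, in step 3).
cursor = None
while True:
    rows, cursor = fetch_page(cursor)
    conn.executemany("INSERT INTO raw_accounts VALUES (?, ?)",
                     [(r["id"], r["plan"]) for r in rows])
    if cursor is None:
        break

count = conn.execute("SELECT COUNT(*) FROM raw_accounts").fetchone()[0]
print(count)  # -> 3
```

Managed tools add the parts worth paying for: schema change handling, incremental syncs, retries, and hundreds of prebuilt connectors.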

2. Store data

Data needs to be stored in a data warehouse (e.g. BigQuery / Redshift / Postgres) to make it easily accessible for business teams. These data warehouses are optimized for storing and querying large amounts of data and can handle the concurrency and performance required by data teams. We chose Postgres as it was the fastest and most cost-effective for our needs.

3. Transform data

Once the data is stored, it needs to be transformed (e.g. aggregated, cleaned, standardized) to make it usable for business teams. We use dbt to run these transformations as SQL and publish the results back into our BI warehouse: raw data lands in Postgres, gets transformed there, and is then published for business use.
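A sketch of the kind of transformation involved: rolling raw product-usage events up into a daily per-customer table that business teams can query directly. The table and column names are illustrative, not our actual schema, and SQLite stands in for Postgres; in dbt, the inner SELECT would live in a model file that dbt materializes on each run:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (customer_id INTEGER, event_date TEXT)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)", [
    (1, "2023-01-10"), (1, "2023-01-10"), (2, "2023-01-10"),
    (1, "2023-01-11"),
])

# Aggregate raw events into a clean, business-facing table.
# With dbt, this SELECT is a model and the materialization
# (table vs. view) is configuration rather than hand-written DDL.
conn.execute("""
    CREATE TABLE daily_usage AS
    SELECT event_date, customer_id, COUNT(*) AS events
    FROM raw_events
    GROUP BY event_date, customer_id
""")

rows = conn.execute(
    "SELECT event_date, customer_id, events FROM daily_usage "
    "ORDER BY event_date, customer_id"
).fetchall()
print(rows)  # -> [('2023-01-10', 1, 2), ('2023-01-10', 2, 1), ('2023-01-11', 1, 1)]
```

The payoff of doing this in dbt rather than ad hoc is that each transformation is versioned, testable, and rebuilt automatically when upstream data changes.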

4. Activate

Once cleaned data is available, business users can decide on the best way to activate it. Enabling business teams to activate their own data is very effective, as they know best what will drive the most impact for the business. There are multiple use cases here, and what you choose will vary depending on your vertical and needs. We see 2 main use cases that we believe will be prevalent across industries:

(a) Data visualization - for which there are a plethora of tools out there. We implemented Metabase, largely because we also use their embedding capabilities to offer our clients dashboarding and analytics based on their Flawless usage. 

(b) Monitoring / alerting tools - which ensure that business teams are aware of critical events and able to respond quickly. This is a new field for business use: most companies still rely on data visualization to pick up relevant events, which is time-consuming and error-prone. Business monitoring tools allow business teams to create their own alerts and reports, and to request specific data views in a structured format. In case you had any doubts, we use our own Flawless for this, for obvious reasons.
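The core of such an alert is simple: a rule a business user defines on a clean data view, evaluated whenever the view refreshes. A minimal sketch, with a made-up KPI and threshold (not how Flawless is implemented):

```python
def should_alert(kpi_value, threshold, direction="below"):
    # Fire when the KPI crosses the configured threshold in the
    # configured direction, e.g. on-time delivery rate dropping
    # below 95%. Real tools add deduplication, routing, and history.
    if direction == "below":
        return kpi_value < threshold
    return kpi_value > threshold

print(should_alert(0.93, 0.95))  # -> True  (rate dropped under 95%)
print(should_alert(0.97, 0.95))  # -> False (within target)
```

The value over a dashboard is that no one has to be looking at the right chart at the right time: the event finds the team instead.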

By using these tools, data teams can automate many of the repetitive and time-consuming tasks associated with data management and can focus on extracting value from the data. Note: research into what tooling works best for your company is key, as requirements will vary based on the size and scale of the company.

Conclusion

By following this 5-step approach and using a tool stack like the one above, data teams can save 10+ hours per week and focus on extracting value from the data. This results not only in cost savings but also in increased business value. As noted above, do research into which tools are best suited for your business, as the answer will vary with the size and scale of the company.

If you are interested in discussing this further, feel free to contact us at Flawless.


