
KPIs with Control Charts

· 3 min read
Abhi Yerra
Founder, opsZero

W. Edwards Deming popularized the use of Statistical Process Control as a means to improve quality. This method helped transform Japanese industry into an industrial power after it was devastated in World War II.

opsZero implements Statistical Process Control, using Control Charts to improve our processes, because Quality is one of our Principles.

We have certain goals with our KPIs (Key Performance Indicators):

  1. Conservative — We want our numbers to reflect reality.
  2. Consistent — All KPI charts should look the same with a 1 sigma control on both the Upper and Lower Limits.
    • Charts should show “up and to the right” for any goal we set up.
    • This makes it easy to see at a glance if we are attaining our goal.
    • To achieve this, we use Control Charts, which track a process over time (average, Upper Control Limit, Lower Control Limit).

Control Chart Example

Control Limits are based on the standard deviation. The aim is to keep the process within its limits and gradually tighten them over time.

  • A 3 sigma upper/lower limit may be too loose to be useful.
  • Reducing variance so the process stays within 1 sigma or even 0.5 sigma may be ideal, depending on the process.

Another Control Chart Example

The nice thing about Control Charts is that they can be used for:

  • Revenue goals
  • Process goals
  • Any measurable goal over time

This makes Control Charts a useful visualization across functions.

At opsZero, Control Charts are used for all at-a-glance KPIs to find patterns in how we are moving towards our targets. In addition, we use Pareto Charts to build products that reduce Support and Sales issues (covered in a future post).


Example: Generate Control Charts in Python
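A minimal sketch of computing control limits in pure Python; the sample data and the helper names (`control_limits`, `out_of_control`) are illustrative, not our production code. Plotting the series along with the average, UCL, and LCL lines then produces charts like the ones above:

```python
from statistics import mean, stdev

def control_limits(samples, sigmas=1.0):
    """Return (average, upper control limit, lower control limit)."""
    avg = mean(samples)
    spread = sigmas * stdev(samples)
    return avg, avg + spread, avg - spread

def out_of_control(samples, sigmas=1.0):
    """Indices of the points that fall outside the control limits."""
    avg, ucl, lcl = control_limits(samples, sigmas)
    return [i for i, x in enumerate(samples) if x > ucl or x < lcl]

# Hypothetical weekly KPI values; the spike at index 5 breaches the 1 sigma UCL.
weekly_kpi = [10.2, 10.8, 9.9, 10.5, 10.1, 14.0, 10.3]
avg, ucl, lcl = control_limits(weekly_kpi)
print(out_of_control(weekly_kpi))  # → [5]
```

Tightening the process over time corresponds to lowering `sigmas` (say from 1.0 to 0.5) while reducing the variance of the underlying data until the points still fall within the limits.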



And there is Azure coming from behind…

· 3 min read
Abhi Yerra
Founder, opsZero

The Bay Area startup tech stack is MacBooks, Google Workspace, Slack, iPhones, and either AWS or Google Cloud. The rest of the world seems to be Microsoft Windows, Microsoft Office, Microsoft Teams, Android, and an on-site SharePoint server. AWS has the most to lose as Azure catches up.

My wife recently needed Parallels with Windows installed on her MacBook to use ArcGIS. I thought, what the hell, and installed Parallels on my own machine because I’ve heard so much about how much better Excel for Windows is than the Mac version. (Yes, I got excited about Excel, so sue me…)

So I did it. And having played with Windows for the first time in a decade and a half, I have to say I finally get Microsoft’s strategy after seeing this parallel universe.


Microsoft is playing a long game. But their game is to tie everything, and I mean everything, to Microsoft Azure.

  • GitHub, Office, Excel, VSCode, Windows, the Power Platform — all roads lead to Azure.
  • Excel pulls data from Azure, making it an alternative to tools like Tableau.
  • GitHub Actions use Azure for compute.
  • VSCode is connecting more and more to Azure for easy deployments.
  • Windows has easy corporate deployment options via Active Directory on Azure.

Those of us in the Bay Area bubble with the Apple, Google, and AWS tech stack may be missing out on one of the significant technological shifts. I am betting the winner, in the long run, will be Microsoft.

Microsoft has a huge distribution advantage. Say what you will about Steve Ballmer, but he built a high-power enterprise sales team at Microsoft. Buying a single unified package from Microsoft will, over time, be cheaper than buying piecemeal software from different vendors.

This is why Slack lost. But everyone in the Bay was scratching their heads at why Slack lost — because we were looking at Google as the 800-pound gorilla, not Microsoft, which is now the 1200-pound gorilla.


Long-term trajectory

From a technological standpoint, Azure will consistently be behind AWS. Microsoft is a close follower, not a leader.

  • If you want the newest innovations → AWS will still likely be the primary Cloud provider.
  • If your company is conservative and doesn’t care about newness → Microsoft will be just fine.

There will be deals that give companies Azure + Office + Teams at a bundled rate cheaper than piecemeal competitors. Companies will pay for it.


This is all speculative, of course. Amazon, being one of the most innovative companies of our generation, will hopefully give Microsoft a run for its money.

But at this point, the two Clouds I am betting on for production, compliance-oriented workloads are Azure first, then AWS.

Deploying to Cloudflare Pages using Github Actions

· One min read
Abhi Yerra
Founder, opsZero

Cloudflare provides a great CDN with no egress charges on bandwidth. The best way to use Cloudflare is through Cloudflare Pages.

Using Cloudflare Pages should be pretty straightforward for most frameworks that generate a SPA. However, see the example below for how to use Cloudflare Pages from asset pipelines for Ruby on Rails and Django.


Example: Publish Django Static Files with GitHub Actions

Here is an example of using GitHub Actions to publish Django static files:

- name: Build Static Files
  run: |
    docker run --env STATIC_ROOT='/static-compiled/' \
      --env DATABASE_URL='sqlite:///db.sqlite' \
      -v $PWD/static:/app/static \
      -v $PWD/static-compiled:/static-compiled \
      $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG \
      python manage.py collectstatic --noinput

- name: Publish Static Files
  uses: cloudflare/wrangler-action@2.0.0
  with:
    apiToken: ${{ secrets.CF_API_TOKEN }}
    accountId: ${{ secrets.CF_ACCOUNT_ID }}
    command: pages publish ./static-compiled --project-name=opszero-static --commit-dirty=true

Django Everywhere

· 2 min read
Abhi Yerra
Founder, opsZero

Our goal with Workflows & Automations includes standardizing on Python as our backend language of choice. We wanted to standardize on a common web framework as well.

While Flask and FastAPI are the most popular for APIs, they take a no-batteries-included approach, leading to N+1 ways of building software. opsZero is building an opinionated stack that requires standardization, so we have chosen Django as our Web Framework.

Our entire business already relies on Django as its primary framework, so we have extensive experience using it, and we absolutely love the built-in templating and the ORM with migrations.

Lastly, with ASGI, a lot of extra features for WebSockets and Events are built into the framework using channels. The built-in functionality along with years of Django experience means we can provide Django expertise quickly.


Pre-Built Django Templates

We want to meet our customers' needs regardless of the Cloud they are using, at the lowest possible cost, so we are making pre-built templates available for using Django in both Serverless and Kubernetes environments.

We are releasing three templates:

These three templates allow us to deliver value to you faster.

Elon Musk’s Engineering Principles

· 3 min read
Abhi Yerra
Founder, opsZero

Think what you want of Elon Musk, but he has achieved quite a bit in engineering novel solutions to complex problems. We have implemented much of the same process to great effect in what we do.

The principles are:

  1. Fix dumb requirements. Each requirement has a specific owner.
  2. Remove unnecessary parts.
  3. Simplify/Optimize.
  4. Speed up cycle time.
  5. Automate.

You can watch him describe his process here:


Fix dumb requirements

When solving a problem for a customer, the customer may not actually know what they need. So uncover the actual requirement behind the request.

Usually, a problem such as:

The production database has high CPU usage and clients can’t connect

may actually be a root cause issue:

The production database is being used to replicate data to a data warehouse, which is causing the issue.

The root cause can be uncovered through a Five Whys analysis.

Second, with these requirements there needs to be a clear owner responsible for the issue. If there is not an owner for something, then that itself is an issue. Ownership of each component means that someone exists to optimize each piece.


Remove unnecessary parts

Systems over time become complex. Pieces are added that don’t need to exist—or they were added, then forgotten about.

Systems should get less complex, not more so.

As we build things, we build to get the task done. This means we may add complexity that wouldn’t need to exist in the final system, but while we are pathfinding our way to the solution, that complexity serves a purpose.

Once we get to the point of the system working as needed, we go back and remove the pieces that are not needed.


Simplify/Optimize

After removing unnecessary parts, there may still be complexity within the current components.

To simplify these components, we need to reduce variability and increase standardization.

For example:

  • The use of multiple if-else blocks to account for variability can increase complexity.
  • Simplification requires subjective decisions on the optimal approach.

It’s best to initially build with some variability, then refine through A/B testing over time toward the optimal solution.


Speed up cycle time

Once an optimal approach is found:

  • Remove variability
  • Standardize the approach for deliverability

This leads to faster outcomes, with fewer branching paths, creating better flow.


Automate

Lastly, automate the processes such that things happen without intervention.


Using Cloudflare D1

· One min read
Abhi Yerra
Founder, opsZero

Cloudflare D1 is a great way to quickly create and work with SQLite databases where a larger PostgreSQL or MySQL database doesn’t make sense. Here are some examples for working with D1 quickly.

Create the Database and Table

wrangler d1 create data-cloud-vendors
wrangler d1 execute data-cloud-vendors --command='CREATE TABLE Customers (CustomerID INT, CompanyName TEXT, ContactName TEXT, PRIMARY KEY (`CustomerID`));'
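Since D1 databases are SQLite, the same schema can be exercised locally before pushing it through wrangler. A quick check with Python’s built-in sqlite3 module (the sample row is illustrative):

```python
import sqlite3

# D1 speaks SQLite, so the same DDL used with wrangler works locally.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Customers (CustomerID INT, CompanyName TEXT, "
    "ContactName TEXT, PRIMARY KEY (`CustomerID`))"
)
# Illustrative sample row, not real data.
conn.execute("INSERT INTO Customers VALUES (?, ?, ?)", (1, "Acme", "Jane Doe"))
rows = conn.execute("SELECT CompanyName FROM Customers").fetchall()
print(rows)  # → [('Acme',)]
```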

Windows Based Crawler

· 2 min read
Abhi Yerra
Founder, opsZero

I like Excel for Windows. The Mac version is a joke compared to what the full-blown Windows version can do with data analysis and data finagling right from the app itself.

A lot of what I have been working on lately is getting crawled data (via Playwright) into an Excel workbook stored on OneDrive. Some of the data is small enough that building a full database isn’t necessary, but not normalized enough to just use Power Query.

To achieve this outcome I have used GitHub Actions to trigger the run. GitHub Actions triggers on a schedule which sends the task to a GitHub Runner that starts a Python script. Since GitHub Actions has access to the root volume on the Mac Mini (don’t worry, the machine is dedicated to just GitHub Actions) I can use xlwings to launch Excel and update the workbook.

Once completed, it just copies the file into OneDrive or Dropbox for me to access elsewhere.
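The Excel update step can be sketched as follows; the helper names and workbook layout are assumptions, not the actual script. The crawled records are flattened into a header row plus data rows, then written through xlwings, which drives the locally installed Excel:

```python
def rows_to_table(records):
    """Flatten a list of dicts into a header row plus data rows for Excel."""
    if not records:
        return []
    headers = list(records[0])
    return [headers] + [[r.get(h) for h in headers] for r in records]

def update_workbook(path, sheet_name, records):
    """Write crawled records into an Excel workbook via xlwings.

    Only works on a machine with Excel installed, which is why this
    runs on the dedicated self-hosted runner.
    """
    import xlwings as xw  # imported lazily: only present on the runner

    wb = xw.Book(path)
    sheet = wb.sheets[sheet_name]
    sheet.clear_contents()
    # Assigning a 2D list to one cell spills it into the full range.
    sheet["A1"].value = rows_to_table(records)
    wb.save()
    wb.close()
```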

The workflow is no different from one targeting a GitHub-hosted runner; it simply runs on a self-hosted instance that happens to have Excel installed:

name: Download and Upload
on:
  schedule:
    - cron: "0 1 * * *"
  push:
    branches:
      - main

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Install Dependencies
        run: |
          pyenv global 3.11
          pip3 install -r ./requirements.txt
      - name: Combine
        run: |
          python ./main.py