Building a courier from scratch: How we left no-code behind

Robin Bilgil

Last summer, we set out to replace the first iteration of our web app, built on a no-code platform. This is the first of a series of posts on how we transitioned our product and operations to our own tech.

The early days of Packfleet

Packfleet started as a tiny courier with a handful of parcels moving from a couple of small businesses to consumers in London. We started with an MVP web app that let early merchants sign up and create and manage their shipments. It was built – before we had an engineering team – using Bubble, a no-code platform. Bubble comes with a datastore, a UI builder, basic analytics and lots of plug-ins which can be glued together to create surprisingly powerful tools.

Coding without code: Bubble’s workflow builder

Each morning our drivers would arrive at our depot around 9am. From Bubble, we’d output a CSV of deliveries and collections which we then plugged into a routing / dispatch tool (Circuit) to assign drivers to routes and track them throughout the day. It’s pretty amazing that this is all possible without writing a single line of code.

As impressive as these solutions are though, they don’t really scale beyond an MVP. We consistently made mistakes during the manual export/import steps which caused problems with deliveries and produced bad customer outcomes. We had tracking number collisions that we had to notice and deal with manually. We had to hack around Bubble’s limitations for the more advanced features our customers wanted like recurring collection dates.

There were also countless other small things that were annoying or tedious, like an empty page that appeared at the end of our PDF shipping labels due to a bug in the plug-in we used for generating them.

At Packfleet we want to build the best courier in existence, and for that we know engineering needs to be our core competency. One of the first decisions we made when I joined Packfleet as an engineer in August 2021 was to build our own backend systems and web app to replace Bubble.

From no-code to code

There are some interesting challenges in replacing a no-code system.

For one, it’s not possible to replace incrementally. We didn’t have a database we could connect to or a frontend-backend divide to use as a starting point. It was all one system in a walled-garden owned by Bubble.

There are some crazy things you can do with Bubble plug-ins, like connect to an external SQL database or an external API. In my experience though, solutions like this just take you further down the rabbit-hole of technical debt.

So we decided to replace all of Bubble in a single transition, stopping at the point where we export a CSV into Circuit, our routing tool. This would be a big project, but it was the only good way forward.

The second challenge is figuring out the behaviour of the system: discovering existing functionality without reading code is hard! If only we could write unit tests on Bubble and check they still passed on our new platform to gain confidence…

Instead, we had to translate what was in our heads into code, unit test the new code as best as we could, and do plenty of careful manual testing with real data.

The old Packfleet web app, as seen in Bubble’s UI builder

Preparing for the transition

The planning we did before starting the project was key. We laid out some important things before starting to build:

1. We set a clear (and ambitious) switchover date, before which we'd make no major changes to how our current platform worked

2. We chose our tech stack and set up the supporting infrastructure

3. We used a task tracking tool (Linear) to write out all the units of work before we had a complete system, and divided them up into projects. We used Linear’s Roadmap feature to get a sense of progress within each project over time.

4. We cleared our calendars of any non-critical meetings and went into relentless building mode.

The ambitious date we set ourselves was the first week of October: just over a month to build, test, migrate users and launch!

What we had to build

There was a lot of work to get through! Here’s a sample of the projects we had lined up:

  • The Login / Signup flow, preserving all our existing user data.
  • A merchant dashboard, allowing merchants to book collections, recurring collections and shipments while preserving all their past shipment data.
  • Functionality to generate shipment labels as a PDF. (For this, we run a puppeteer instance on our backend combined with ReactDOM for HTML generation – there's a rough sketch after this list, and perhaps another blog post on it later.)
  • A Shopify integration for our merchants to import orders directly from their shops. (It took us many weeks to get our Shopify app approved, which was a saga on its own. Thankfully we got it over the line just in time.)
  • A CSV upload function for our merchants to create many shipments at once
  • A shipment tracking page for recipients
  • A QR-code scanner for our drivers to scan packages (collected, out for delivery, delivered etc.)
  • An admin dashboard and CSV export function so we could get all our deliveries and collections into Circuit every morning
  • Lots of other small things to help us operate like analytics (we use Plausible), a Slack integration, error tracking and so on.
Laying out everything we needed to build in our issue tracking tool, Linear
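Here's roughly how that label generation hangs together – a hedged sketch rather than our exact implementation: the ShippingLabel component, its props and the page size below are placeholders.

```typescript
// Sketch: render a React component to static HTML, then print it to PDF with Puppeteer.
// ShippingLabel and the shipment shape are hypothetical placeholders.
import puppeteer from "puppeteer";
import { createElement } from "react";
import { renderToStaticMarkup } from "react-dom/server";
import { ShippingLabel } from "./ShippingLabel";

export async function renderLabelPdf(shipment: {
  trackingNumber: string;
  recipientName: string;
}) {
  // Turn the React tree into a plain HTML string on the server
  const html = renderToStaticMarkup(createElement(ShippingLabel, { shipment }));

  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.setContent(`<!DOCTYPE html><html><body>${html}</body></html>`, {
      waitUntil: "networkidle0",
    });
    // One label per page; A6 is a guess at a sensible label size
    return await page.pdf({ format: "a6", printBackground: true });
  } finally {
    await browser.close();
  }
}
```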

Choosing a tech stack

How do you choose a tech stack for a startup? It’s mostly a guessing game. Some questions to ask are: what sort of request volumes do we expect on our APIs? What are the read / write patterns on the database? Which languages do we know well, have a good ecosystem and will help us hire good engineers? How will we develop products as we grow the company?

My most recent engineering experience was at Monzo, a startup bank in the UK. One option could’ve been to copy Monzo’s microservice architecture built on Go, Kubernetes and Cassandra, designed to process thousands of payment transactions per second.

Microservices are extremely popular today but they come with costs and trade-offs. They’re well-suited for dozens of teams deploying code in parallel, with good process for managing ownership of services and a dedicated DevOps / Platform team managing infrastructure. Less so for a small start-up where the bottlenecks are elsewhere.

As a small company with 2 engineers, we valued speed of iteration, low complexity and low overhead above most other things. For our tech stack we’ve gone with a simpler and more familiar approach: a managed Postgres database on Cloud SQL and a horizontally scalable backend + API running on AppEngine.

This architecture will no doubt evolve considerably over time, but it gives us a great starting point today. We’ll try to cover our tech stack in a lot more detail in a future post, though here are some headliners:

Database

We use Postgres as our main database, managed by Google Cloud SQL. Most of our data is highly relational and Postgres is a battle-tested, reliable and fast relational database that we have experience with. Postgres has been scaled to serve millions of users by many others before us. If we do run into scaling problems there are plenty of directions to take, from read-replicas, to a sharding extension like Citus, to a Postgres compatible distributed database like CockroachDB.

It’s also hard to overstate the benefit of being able to run a copy of it on any machine, so that automated tests and local development behave identically to production; something we can’t do by going with a proprietary "infinite scale" solution like Firestore or Cloud Datastore.

Backend

We use TypeScript + NodeJS on our backend. Running JS on the server often gets a bad reputation for its ecosystem and lack of type-safety. The TS ecosystem has advanced by leaps and bounds in the past 5 years though, bringing a rich type system with it.

We take full advantage of type-safety in our architecture by using code generation as much as we can – writing boilerplate is one of the most inefficient uses of time! We’re not at zero boilerplate yet, but the tooling we use gets us close.

With Prisma for database queries and GraphQL as our API schema, we use Prisma’s CLI and graphql-codegen to generate all the TS types we need from just two schema files. This gets us type-safety all the way from the database connector to the API layer, which has already caught hundreds of bugs and made refactors easier.
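As a flavour of what the generated types buy us, here's a hand-wavy sketch of a resolver built on top of them. The QueryResolvers type, the shipments field and the Prisma model names are illustrative, not our real schema:

```typescript
// Sketch: a resolver typed with graphql-codegen output and querying via Prisma.
// The generated module path, the `shipments` field and the Shipment model are
// illustrative assumptions.
import { PrismaClient } from "@prisma/client";
import type { QueryResolvers } from "./generated/graphql";

const prisma = new PrismaClient();

export const Query: QueryResolvers = {
  // If the GraphQL schema and the Prisma schema drift apart, this stops
  // compiling, so mismatches surface before they reach production.
  shipments: async (_parent, args) => {
    return prisma.shipment.findMany({
      where: { merchantId: args.merchantId },
      orderBy: { createdAt: "desc" },
    });
  },
};
```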

Frontend

We use Next.js as a framework on the frontend, powered by React and TypeScript. To interact with the GraphQL API, we use Apollo client which also provides local caching and acts as a state store.

We take advantage of Next’s server-side rendering to ensure everyone gets a good experience on their first page load; it’s particularly important for our shipment tracking pages, which benefit hugely from one fewer round-trip to the server.
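To make that concrete, here's a simplified, hypothetical version of a server-rendered tracking page – the query, page path and environment variable are invented for the example:

```typescript
// Sketch: pages/track/[trackingNumber].tsx
// Fetch tracking data on the server so the first paint already contains it.
// The query, fields and GRAPHQL_URL env var are assumptions for illustration.
import type { GetServerSideProps } from "next";
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

const TRACKING_QUERY = gql`
  query Tracking($trackingNumber: String!) {
    shipment(trackingNumber: $trackingNumber) {
      status
      estimatedDelivery
    }
  }
`;

type Shipment = { status: string; estimatedDelivery: string };

export const getServerSideProps: GetServerSideProps = async ({ params }) => {
  const client = new ApolloClient({
    uri: process.env.GRAPHQL_URL,
    cache: new InMemoryCache(),
  });

  // Query on the server so the recipient's browser skips an extra round trip
  const { data } = await client.query({
    query: TRACKING_QUERY,
    variables: { trackingNumber: params?.trackingNumber as string },
  });

  return { props: { shipment: data.shipment as Shipment } };
};

export default function TrackingPage({ shipment }: { shipment: Shipment }) {
  return <p>Current status: {shipment.status}</p>;
}
```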

Infrastructure

Our backend runs on Google AppEngine, hooked up to Cloud Build for continuous deployment whenever we merge to the main branch. Using AppEngine has its ups and downs, but it’s allowed us to outsource all the DevOps overhead we’d rather not deal with, with just two engineers in the company. We use Google Cloud Pub/Sub for async processing (e.g. notifications) and Cloud Tasks for cases where we need delayed task execution.
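As a rough illustration of the async side, here's a minimal sketch of publishing a notification job to Pub/Sub; the topic name and message shape are made up for the example:

```typescript
// Sketch: hand a notification off to Pub/Sub so the API request stays fast.
// The topic name and payload are assumptions, not our real schema.
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

export async function enqueueDeliveryNotification(shipmentId: string) {
  // A separate subscriber picks this message up and sends the actual
  // email/SMS asynchronously.
  await pubsub
    .topic("delivery-notifications")
    .publishMessage({ json: { shipmentId, event: "out_for_delivery" } });
}
```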

Our frontend runs on Vercel’s serverless platform, as they provide zero-config deployment of Next.js apps.

Building everything

There were no secrets to this part — we just had to get our heads down and build, build, build! Having past experience with the stack and tools we chose helped us get up to speed quickly, and knowing the exact functionality we needed to replace made it easy to plan the architecture ahead of time.

After roughly a month of constant work, things were progressing well and we had checked off most of the tasks in Linear.

Migrating user data

Our next challenge was to populate our database with existing users, merchants and shipments.

Passwords

To authenticate and manage users, we use Firebase authentication. Firebase supports importing users with hashed passwords from a CSV file. Bubble also offers a CSV export feature, but crucially not the hashed passwords. Bubble claimed this was for security reasons and refused to give details…

We first considered making all our existing customers reset their passwords on migration, but following a short investigation we discovered something that made us question Bubble’s “security reasons” for not exporting hashed passwords: we could hook into the login form and access the raw passwords!

Saving and downloading raw passwords was out of the question, but it would also have been a nuisance if all our customers had to sign up again or reset their passwords when we migrated to our new system. Instead, we used a bcrypt hash plug-in to hash each password at login and save it to a new column in our users table. To capture as many accounts as possible, we e-mailed all our merchants about the migration and asked them to log in again before our switchover date so they could avoid resetting their password.
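On the Firebase side, one way to do the import programmatically is with the Admin SDK's importUsers, which accepts bcrypt hashes directly. A minimal sketch, with made-up field names for the Bubble export:

```typescript
// Sketch: import users and their bcrypt password hashes into Firebase Auth.
// The ExportedUser shape mirrors what a Bubble CSV export might look like and
// is an assumption for illustration.
import * as admin from "firebase-admin";

admin.initializeApp();

type ExportedUser = { id: string; email: string; bcryptHash: string };

export async function importBubbleUsers(users: ExportedUser[]) {
  const records = users.map((u) => ({
    uid: u.id,
    email: u.email,
    // For bcrypt the salt is embedded in the hash string itself
    passwordHash: Buffer.from(u.bcryptHash),
  }));

  // importUsers accepts up to 1,000 records per call
  const result = await admin.auth().importUsers(records, {
    hash: { algorithm: "BCRYPT" },
  });
  console.log(`${result.successCount} imported, ${result.failureCount} failed`);
}
```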

Overall, we managed to migrate over 95% of our active merchants without the need to go through a reset password flow.

Ingesting shipment data

The rest of the data migration was a bit simpler. We exported all our order, shipment and merchant data to a CSV, put it on a GCS bucket, then wrote an API endpoint on our new backend to ingest that file and populate our database. After a few dry runs in a test environment with real data, and rigorous testing, we had the confidence we needed that we could migrate everything successfully on production.
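A rough sketch of what the core of that ingestion endpoint could look like – the bucket, file and column names here are invented for illustration:

```typescript
// Sketch: download the exported CSV from GCS and insert rows via Prisma.
// Bucket, file, column and model names are assumptions for the example.
import { Storage } from "@google-cloud/storage";
import { parse } from "csv-parse/sync";
import { PrismaClient } from "@prisma/client";

const storage = new Storage();
const prisma = new PrismaClient();

export async function ingestShipments() {
  // Download the whole export into memory (fine at MVP scale)
  const [contents] = await storage
    .bucket("packfleet-migration")
    .file("shipments.csv")
    .download();

  const rows: Record<string, string>[] = parse(contents, {
    columns: true,
    skip_empty_lines: true,
  });

  for (const row of rows) {
    await prisma.shipment.create({
      data: {
        trackingNumber: row.tracking_number,
        merchantId: row.merchant_id,
        recipientName: row.recipient_name,
      },
    });
  }
}
```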

Telling customers

We decided to switch everyone to the new system after 6pm on a Friday. We sent out an e-mail ahead of that to inform all our users of the switchover date and time. It was the point of no return! We were committed to the migration now.

Working on the evening of launch, tired but excited!

The result and what’s ahead

We launched on Friday October 8th, just a week later than our original plan. Aside from some small teething problems in the first few days, there were no major issues and we started delivering packs on our new system the following day, leaving Bubble behind for good.

The new Packfleet web app. There’s still a long way to go on our mission, but we now have something to build upon

Far from being the end, this is the start of our new platform, and the long journey ahead of us of building the best courier service in the world. In the next blog post, we’ll talk about how we geared up for another daunting project: replacing Circuit with our own routing and driver tracking software.

There is so much yet to do, and it will take years. Expanding our product to offer nationwide delivery (we’re just in London at the moment), same-day delivery and collections, more integrations for our merchants (Shopify is a great start, but we’d love to be everywhere), writing our own routing engine to remove all limitations of what’s possible, and growing the company by many orders of magnitude; these are just a fraction of the upcoming challenges we face.

We’re hiring!

If the challenges that lie ahead of us sound interesting to you, reach out to us! We’re currently hiring full-stack engineers and we’re super excited to welcome new folks to the team and, together, build the best courier experience for recipients, merchants & drivers.

Check out our job openings here. We’re just an email away at jobs@packfleet.com, or reach out to us directly on Twitter if you want to chat – we’re @rbilgil & @jgarnham.