Case study ⊹ Passion project turned GPU Cloud startup

How my zero-to-one design direction helped Thunder Compute receive Y Combinator funding.

Role: Product and Service Design

Team: Brian Model, Carl Peterson

Stage: Founder stage, pre-MVP

Outcome: The founders quit their jobs as a Citadel quant and a Bain consultant to pursue Thunder full-time (and just secured their second round of seed funding!)

Timeline: 6 months

I hid a T for Thunder in the logo :)

👩🏻‍💻 Contribution highlights

My design work established Thunder’s first coherent product framework and surfaced adoption risks early.

Thunder was founded to solve the problem of GPU scarcity for researchers and startups. Over 6 months, I designed the first service flows and initial wireframes and shaped the MVP that became the foundation for Thunder’s successful YC pitch. That pitch rested on two core bets:

  1. That developers would rent cheaper GPU time from a distributed network instead of relying on centralized cloud providers.

  2. That individuals with idle GPUs would be willing to contribute them in exchange for compensation.

The founders received $500k in initial YC funding to develop Thunder, at an initial valuation of $25 million.

Service Flow Design

Built Thunder’s first end-to-end service flows. These designs surfaced adoption risks such as onboarding friction, gaps in user trust, and application needs.

Strategic Pivoting

Guided the team toward traditional payment options and an abstracted host–user relationship that centralized Thunder’s ecosystem.

Established MVP

Delivered the first service blueprints and user flows that became the baseline for Thunder’s roadmap and a credible foundation for YC pitching.

🗝 A snapshot of Thunder

If you’d like to skip the nitty-gritty…

I’ve curated three simplified mid-fi mockups that tell the story of Thunder from both the user and host perspectives. Thunder has three main entry points, and my goal was to communicate each of them:

The Thunder Console: a lightweight site where both renters and hosts can manage their usage.

Thunder Host App: an interface to manage host-GPU connectivity.

The GPU renter’s command-line interface (CLI for short): where renters run programs that need GPU compute power.

⚡︎ Thunder’s Early Mission

Connecting developers with idle GPUs (worth billions and just sitting in people’s homes) to support AI advancements.

Thunder founders Brian and Carl had one goal in mind: making GPU compute accessible and affordable to all.

GPUs are specialized processors built for parallel computing. They render complex graphics for games and animations, but they also perform the massive calculations that power modern AI. By dramatically accelerating the training and inference of large language models (LLMs) such as those behind ChatGPT, Perplexity, and Claude, GPUs have become a critical resource in today’s AI race.

Brian first encountered the problem of GPU scarcity while sharing compute resources at his research lab at Georgia Tech.

While working on GPU-heavy workloads, Brian ran into a problem many researchers face: GPUs were too expensive, unavailable when needed, or locked up by institutional bottlenecks.

At least his team had access to dedicated GPUs. That’s not the case for everyone in this space.

Anywhere from 10 to 15 researchers were scheduling 8 GPUs via an Excel spreadsheet, and demand was high. If one program ran off schedule, the entire queue was thrown off.

💡 The opportunity

Developers are paying the cost of scarcity.

The need for GPUs is increasing, and so is the line of developers waiting for them.

Cloud providers such as Amazon (AWS), Google (GCP), and Microsoft (Azure) rent GPUs on a strict 1:1 basis: one GPU tied to one customer for the duration of the reservation. Large corporations such as OpenAI can afford this payment model. But because of this exclusivity, providers often run out of available GPUs or set prices extremely high. To guarantee access, customers are forced into long-term reservations (weeks to months).

Individual developers, small teams, and startups are being forced to pay hundreds of thousands of dollars up front for resource capacity they may only partially use. It’s an unsustainable cost barrier.

We gathered informal feedback from students and researchers about GPU accessibility, payment expectations, and trust barriers.

  • Grad students and PhD researchers in ML/AI loved the idea of “renting a GPU quickly”, but didn’t want surprises in billing or availability.

  • Independent developers and early-stage founders were price-sensitive and put off by heavy onboarding.

  • Small startups wanted to explore the compute space but couldn’t afford AWS-scale pricing.

💡 What if...

We could arrange a crowdsourced GPU hosting/borrowing service?

This approach gives users the guaranteed access that cloud reservations are meant to provide, but dramatically lowers the effective cost of short bursts of compute. Users can pay only a fraction of today’s prices while still getting the performance they need, and GPU owners with idle machines can earn income by renting out their hardware online.

Thunder looked to Helium’s decentralized wireless network as inspiration for its payment model. Helium enables individuals to contribute their 5G hotspots and be rewarded in a transparent, tokenized way.

Thunder wanted to mirror this system: GPU hosts could be rewarded for contributing compute, while borrowers could pay using a simplified, blockchain-backed mechanism that ensured trust and minimized fraud.

🗨️ Competitor analysis

I took a look at similar competitor offerings

We looked at both centralized cloud services and an existing decentralized GPU-sharing service to understand pricing, general public perception, and ease of use across the industry.

Positioning Breakdown

Through this initial analysis, we began to map our market positioning. Like Vast, we wanted a hybrid structure: decentralized, crowdsourced GPUs managed through a centralized service layer. Vast’s peer-to-peer approach inspired Thunder, but we aimed to layer in stronger safeguards and build a platform centered on customer feedback. We also initially wanted to position around crypto, seeing the success of competitors such as Render; at the time, blockchain-based contracts were seen as a way to create transparent, automated payments between strangers. Lastly, we wanted Thunder to feel quick and familiar, not bogged down by AWS’s enterprise setup or Render’s crypto hurdles.

🗝 Researching trust builders

Designing trust into the service

I found that, unlike a centralized cloud service, a peer-led marketplace lacks a built-in layer of trust. A peer-to-peer model raises key risks: renters might pay but never receive access, hosts might provide resources without being compensated, and bad actors could misuse another person’s GPU.

Without a centralized authority, certain risks were inherent:

Risk 1: Minimal guarantees on delivery or payments

A renter might pay but never actually receive working GPU access. Or, a host might provide GPU time but never get paid.

Risk 2: Identity risk and gaps in accountability

Users can be anonymous, making it easier to disappear after fraud or misuse. And without oversight, bad actors could run malicious workloads (e.g., crypto mining, spam, hacking) on someone else’s hardware.

To embed trust into the system, I removed direct host–user interaction and designed a network-mediated model. This design created an implicit layer of accountability: the network itself became the centralized source of truth, while preserving the flexibility and affordability of peer-to-peer GPU sharing.

Pooled resources for abstracted access

Hosts contributed GPUs into a shared network, and users borrowed GPU time from that network without ever knowing which host’s hardware they were on. By anonymizing the host–user relationship, both sides gained protection from fraud, misuse, and targeted exploitation.
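To make the pooled, anonymized assignment concrete, here is a minimal sketch of how a network-mediated scheduler could work. It is illustrative only: the ThunderPool, Host, and JobTicket names are my own assumptions, not Thunder’s actual implementation. The key idea is that the renter only ever receives an opaque job ticket, while the job-to-host mapping stays inside the network.

```typescript
// Minimal sketch (my assumption, not Thunder's implementation) of
// network-mediated job assignment: the renter only ever receives an opaque
// job ticket, while the job-to-host mapping stays inside the network.

interface Host {
  hostId: string;     // internal only; never exposed to renters
  gpuModel: string;
  available: boolean;
}

interface JobTicket {
  jobId: string;      // opaque handle the renter polls for results
  status: "queued" | "running" | "complete";
}

class ThunderPool {
  private hosts: Host[] = [];
  private assignments = new Map<string, string>(); // jobId -> hostId, internal

  registerHost(host: Host): void {
    this.hosts.push(host);
  }

  // A renter submits a job; the pool picks any available host and hides which one.
  submitJob(): JobTicket {
    const host = this.hosts.find((h) => h.available);
    if (!host) throw new Error("No GPUs currently available in the pool");
    host.available = false;

    const jobId = `job-${Math.random().toString(36).slice(2, 10)}`;
    this.assignments.set(jobId, host.hostId); // mapping never leaves the network
    return { jobId, status: "queued" };       // renter sees only the ticket
  }
}
```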

Duplicating workloads as a quality check

Workloads were distributed across multiple GPUs, ensuring reliable outputs even if one underperformed. This also served as a check against malicious hosts pretending to offer a GPU just to collect crypto rewards.
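As a hedged sketch of how that redundancy check might work: the same job runs on several hosts, and the network only accepts an output that a majority of hosts agree on, flagging the dissenters. This assumes outputs can be compared byte-for-byte, and all names here are illustrative rather than Thunder’s code.

```typescript
// Hypothetical sketch of redundant execution: run the same job on several
// hosts and accept only the output a majority agrees on, flagging dissenters.
// Assumes outputs can be compared byte-for-byte; names are illustrative.

import { createHash } from "crypto";

type HostResult = { hostId: string; output: string };

function fingerprint(output: string): string {
  return createHash("sha256").update(output).digest("hex");
}

function resolveJob(results: HostResult[]): { accepted: string; flagged: string[] } {
  if (results.length === 0) throw new Error("No host results to compare");

  // Count how many hosts produced each distinct output.
  const votes = new Map<string, number>();
  for (const r of results) {
    const key = fingerprint(r.output);
    votes.set(key, (votes.get(key) ?? 0) + 1);
  }

  // The majority fingerprint wins; hosts that disagreed get flagged for review.
  const [winner] = [...votes.entries()].sort((a, b) => b[1] - a[1])[0];
  const accepted = results.find((r) => fingerprint(r.output) === winner)!.output;
  const flagged = results
    .filter((r) => fingerprint(r.output) !== winner)
    .map((r) => r.hostId);

  return { accepted, flagged };
}
```

In practice GPU results are rarely bit-identical, so a real check would compare within a tolerance, but the principle of agreement across independent hosts is the same.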

Trust ratings on host hardware to incentivize quality host performance

By continually grading host performance, we could learn more about what the ideal host looks like in terms of connectivity, compute speed and other analytics. This would help us refine our system and also help users trust the GPU pool more.
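One simple way to implement such a rating, sketched below under my own assumptions (the metrics, weights, and 0-to-1 scale are illustrative), is an exponential moving average that rewards completed, fast jobs and penalizes failures, so recent behavior counts more than old history.

```typescript
// Illustrative trust score: an exponential moving average over per-job
// observations, so recent behavior outweighs old history. Metrics, weights,
// and the 0-to-1 scale are my own assumptions.

interface JobReport {
  completed: boolean;       // did the job finish and pass the redundancy check?
  latencyMs: number;        // time from assignment to first response
  throughputScore: number;  // 0..1, normalized compute speed vs. expectation
}

const ALPHA = 0.2; // how quickly the score reacts to new evidence

function updateTrust(currentScore: number, report: JobReport): number {
  // Blend completion, responsiveness, and speed into one 0..1 observation.
  const latencyScore = report.latencyMs < 2000 ? 1 : 0.5;
  const observation = report.completed
    ? 0.5 + 0.25 * latencyScore + 0.25 * report.throughputScore
    : 0; // failed or fraudulent jobs pull the score down hard

  return (1 - ALPHA) * currentScore + ALPHA * observation;
}

// Example: a reliable host starting at 0.5 climbs toward 1.0 over time.
let score = 0.5;
score = updateTrust(score, { completed: true, latencyMs: 800, throughputScore: 0.9 });
console.log(score.toFixed(3)); // ~0.595
```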

🗒️ Service frameworks

I drew up Thunder’s initial service flows.

Based on the requirements, I created an initial service model that the team still mostly uses today. Shown below are several important scenarios:

  • Thunder’s overall flow

  • Host listing a GPU and receiving payment

  • User running a program

Thunder Service Flow

Thunder’s service connects GPU users with hosts through a credit-based system. Users purchase credits in the Thunder Console, which are logged in Firebase for speed and finalized on the blockchain for trust. Compute jobs are submitted through the Thunder Network, executed by hosts, and returned to users. Hosts are paid out in credits after their jobs complete, ensuring both affordability for users and reliability for providers.
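The “Firebase for speed, blockchain for trust” split can be pictured as a two-tier ledger. The sketch below is an assumption of mine rather than Thunder’s code: FastStore stands in for the Firebase record, SettlementLayer for the on-chain finalization, and the slow step runs asynchronously so the Console stays responsive.

```typescript
// Two-tier credit ledger sketch (an assumption of mine, not Thunder's code):
// a fast store stands in for the Firebase record, a settlement layer for the
// on-chain finalization, and the slow step runs asynchronously so checkout
// stays responsive.

interface FastStore {
  record(userId: string, credits: number): Promise<void>;
}

interface SettlementLayer {
  finalize(userId: string, credits: number): Promise<string>; // returns a tx reference
}

class CreditLedger {
  constructor(private fast: FastStore, private chain: SettlementLayer) {}

  async purchaseCredits(userId: string, credits: number): Promise<void> {
    // 1. Fast path: credit the balance immediately so the user can submit jobs.
    await this.fast.record(userId, credits);

    // 2. Slow path: finalize the purchase as a tamper-resistant record.
    //    Fire-and-forget here; a real system would queue and retry failures.
    this.chain
      .finalize(userId, credits)
      .catch((err) => console.error(`Settlement needs retry for ${userId}`, err));
  }
}
```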

Host’s Service Flow

Hosts register, connect, or disconnect GPUs through the Thunder Host App, which communicates their capacity to the Thunder Compute Network. Jobs are assigned, run on the GPU, and returned through the Host App. Job completions are recorded via Firebase and finalized on-chain, with credits exported to the Thunder Console. Hosts receive payouts in their Thunder Wallet and can redeem them at any time. This flow gives hosts full control over participation while ensuring trusted, verifiable rewards.
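As a rough illustration of the Host App side, a host agent could announce capacity, poll for assigned jobs, report completions, and accrue credits toward its wallet. The loop below is a hypothetical sketch; the NetworkClient interface, polling interval, and payout threshold are all assumptions.

```typescript
// Hypothetical Host App core loop: announce capacity, poll for assigned jobs,
// report completions, and accrue credits toward the host's wallet. The
// NetworkClient interface, polling interval, and payout threshold are assumptions.

type Job = { jobId: string; payload: string };

interface NetworkClient {
  announce(gpuModel: string): Promise<void>;
  nextJob(): Promise<Job | null>;
  reportCompletion(jobId: string, output: string): Promise<number>; // credits earned
  withdraw(): Promise<void>;
}

async function runHostAgent(net: NetworkClient, gpuModel: string): Promise<void> {
  await net.announce(gpuModel); // "connect GPU" in the Host App
  let walletCredits = 0;

  while (true) {
    const job = await net.nextJob();
    if (!job) {
      await new Promise((resolve) => setTimeout(resolve, 5_000)); // idle: poll again shortly
      continue;
    }

    const output = `ran ${job.payload} on ${gpuModel}`; // placeholder for real GPU work
    walletCredits += await net.reportCompletion(job.jobId, output);

    if (walletCredits >= 100) { // arbitrary payout threshold for illustration
      await net.withdraw();     // host redeems credits from the Thunder Wallet
      walletCredits = 0;
    }
  }
}
```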

User borrows a GPU

When a user starts a program from their CLI, the request is passed through their application into Thunder’s Network. The network checks if the user has sufficient credits. If additional funds are needed, the user can top up their crypto wallet, which updates back into the system. Once funds are confirmed, the job is assigned to an available host GPU, executed, and the results are returned through the network, back to the user’s program, and finally displayed in the CLI. This ensures jobs only run against verified balances while giving users a seamless experience of submitting compute tasks and receiving results.
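The renter-side logic can be summarized in one function: estimate the cost, top up any shortfall so the job only runs against a verified balance, then submit and wait for results. This is a hypothetical sketch, not Thunder’s actual CLI internals; RenterAccount and ComputeNetwork are placeholder interfaces.

```typescript
// Hypothetical renter-side flow, not Thunder's actual CLI internals: verify
// the balance, top up any shortfall, then submit the job and wait for results.
// RenterAccount and ComputeNetwork are placeholder interfaces.

interface RenterAccount {
  balance(): Promise<number>;
  topUp(credits: number): Promise<void>;
}

interface ComputeNetwork {
  estimateCost(program: string): Promise<number>;
  run(program: string): Promise<string>; // resolves with the program's output
}

async function runOnThunder(
  program: string,
  account: RenterAccount,
  network: ComputeNetwork
): Promise<string> {
  const cost = await network.estimateCost(program);
  const balance = await account.balance();

  // Jobs only run against verified balances: cover any shortfall first.
  if (balance < cost) {
    await account.topUp(cost - balance);
  }

  // The network assigns an available host GPU and returns the results.
  return network.run(program);
}
```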

🗨️ Reactions and reflections

Without convenient payment methods, users were reluctant to try Thunder

Helium’s crypto model shows how individuals can passively contribute to a network in exchange for crypto. Thunder envisioned something similar, with hosts donating GPU compute and earning tokens. But for those renting GPUs, consistency and ease of use often outweighed cost. Many chose more expensive providers simply because they offered straightforward payments and reliable compute access. This highlighted a key adoption barrier:

Requiring users to set up a crypto wallet created friction that overshadowed Thunder’s extremely competitive pricing.

My early user flows surfaced this risk, and the team was able to pivot away from crypto as the sole payment method early on.

༄ Winding down

Reflections

Thunder taught me that early-stage startup design isn’t just about making wireframes or diagrams. It’s also about shaping trust, strategy, and credibility at the very beginning of a company’s story.

Thunder was just an idea when I joined, so my work became the team’s first coherent framework. Through my service design and research, I helped the founders refine their vision and set them up for success.

Looking ahead, I also explored future product avenues with the founders to strengthen investor pitches and open doors for growth:

  • Optimizing data centers for higher GPU efficiency.

  • Serving corporate clients with more predictable, scalable, and affordable compute than typical cloud providers.

  • Helping individual hosts optimize their own GPU performance.

  • Supporting private rentals to specific users or teams so they can optimize their own resources.

  • Expanding to other niches, such as rendering workloads, to attract the kinds of customers Render Network serves.

Although Thunder’s services have shifted as they focus on building their core infrastructure and respond to customer feedback, the baseline flows I designed have mostly remained intact. As the company matured, my work was handed off to an external design team.

Through this work, I learned that the real value of design at the founder stage lies in instilling confidence in both users and investors. My early designs gave Thunder a foundation to scale upon.