hotdog shooting through outer space like an asteroid

How I built HBO Silicon Valley's SeeFood app with no code 🌭🌭

My goal

As the Director of Product at Landing AI, one of my goals is to continuously look for ways to make computer vision more accessible. While I've worked with many software and ML engineers in my career, I have never personally written a line of code. So I decided to put the product we have been building to the test.

Tech stack

My no code stack consisted of:

  • Kaggle to source my food classification images 
  • Bubble.io to build my frontend
  • OpenAI and Stability AI to create some fun graphics
  • LandingLens to classify, train and deploy my model 

The process

In just a day or two, between meetings, I cooked up an app that recognizes and classifies food. That's right, hot dogs and not-hot-dogs alike!
Below, I detail the steps I went through to ship this legendary app. Please excuse the Silicon Valley references 😃

01/

Scouring Kaggle for the Perfect Wiener Dataset

Imagine a hot dog. Or not. I rummaged through Kaggle's vast data pantry and, after a quick search, emerged with the Holy Grail of frankfurter data: a dataset of over 2,000 pre-classified images of hot dogs (and 2,000+ of not-hot-dogs). A bit of overkill for LandingLens' data-centric technology, which is designed to work with far smaller datasets, but great nonetheless.

If you don't know Kaggle yet, it is a great source to kick off your machine learning projects. It provides free datasets and even hosts competitions around ML.
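If you want to peek at the data before uploading it anywhere, a quick sanity check that the two classes are balanced takes only a few lines of Python. The hot_dog / not_hot_dog folder names below are assumptions; match them to the dataset you actually download.

```python
# Sketch: count images per class folder after downloading the dataset
# (via Kaggle's Download button or the `kaggle` CLI).
from collections import Counter
from pathlib import Path
import tempfile

def count_images(root):
    """Count .jpg files per class folder, e.g. hot_dog/ and not_hot_dog/."""
    counts = Counter()
    for img in Path(root).glob("*/*.jpg"):
        counts[img.parent.name] += 1
    return counts

# Simulate the expected folder layout with a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    for cls in ("hot_dog", "not_hot_dog"):
        d = Path(tmp) / cls
        d.mkdir()
        for i in range(3):
            (d / f"{i}.jpg").write_bytes(b"")
    result = count_images(tmp)

print(dict(result))
```

A roughly 50/50 split like this one is ideal for a binary classifier; a lopsided dataset would bias the model toward the majority class.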

hot dog detection app screenshot

02/

LandingLens Eats Up Images Like a Kid at a Hot Dog Stand

Next up, LandingLens. Easier than explaining Pied Piper's compression algorithm: I set up a classification project, tossed in the Kaggle hot dog data, and hit 'Train'. Since this was a larger dataset, it took LandingLens about an hour to conjure up a model with >98% precision and recall. A few more clicks to spin up an endpoint, and voila, I was armed with a hot dog model that would give Jian Yang's a run for its money.
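For anyone wondering what that >98% actually means: precision is how often a "hot dog" call is correct, and recall is how many real hot dogs get caught. A worked example with a toy confusion matrix (made-up numbers, not my model's actual results):

```python
# Toy counts for illustration -- not the real LandingLens evaluation.
tp = 98  # hot dogs correctly flagged as hot dogs (true positives)
fp = 2   # not-hot-dogs wrongly flagged as hot dogs (false positives)
fn = 1   # hot dogs the model missed (false negatives)

precision = tp / (tp + fp)  # of everything flagged, how much was right?
recall = tp / (tp + fn)     # of all real hot dogs, how many were found?

print(f"precision={precision:.2%}, recall={recall:.2%}")
```

Having both metrics above 98% means the model almost never cries "hot dog" falsely and almost never lets one slip by.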

landinglens hot dog app screenshot

03/

LandingLens and Bubble.io Become Besties

Next came the LandingLens + Bubble.io connection. I plugged in my LandingLens API key and secret, crafted a POST request (all provided within the LandingLens app), and pointed it at my 🌭 endpoint.
I also created a workflow in Bubble.io that runs the POST request each time an image is uploaded.
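Outside of Bubble.io, the same call could be scripted in a few lines of Python. The endpoint ID, header name, and response shape below are assumptions sketched for illustration; copy the exact values LandingLens shows you on the deploy page of your own project.

```python
import json

# Placeholder base URL -- LandingLens provides the real request details
# (URL, endpoint ID, credentials) when you deploy a model.
PREDICT_URL = "https://predict.app.landing.ai/inference/v1/predict"

def build_request(endpoint_id, api_key):
    """Return the URL and headers for the POST; the image itself goes in
    the request body as multipart form data."""
    url = f"{PREDICT_URL}?endpoint_id={endpoint_id}"
    headers = {"apikey": api_key}
    return url, headers

def top_label(response_text):
    """Pull the highest-confidence class out of a (hypothetical) response."""
    preds = json.loads(response_text)["predictions"]
    best = max(preds, key=lambda p: p["score"])
    return best["label_name"], best["score"]

# What a classification response might look like:
sample = json.dumps({"predictions": [
    {"label_name": "hot_dog", "score": 0.99},
    {"label_name": "not_hot_dog", "score": 0.01},
]})
label, score = top_label(sample)
print(label, score)
```

The Bubble.io workflow does exactly this under the hood: fire the POST on image upload, then read the top label out of the JSON to decide between 🌭 and not-🌭.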

04/

UI magic

Admittedly, the UI took me the longest. I am a product manager, not a designer, and I couldn't quite create the UI I had envisioned on my own.

I used OpenAI's DALL·E 2 and Stability AI's DreamStudio to find the most appetizing UI toppings for the app. Let's just say I burned through my free credits faster than Russ Hanneman lost his 'three comma' status. Check out some of the ones that didn't make the cut below. You can probably guess what my prompt consisted of..... ⭐🌟🪐🌌🍔🌮🌭

hot dog detection app screenshot

The result

I'm honestly quite surprised I was able to pull it off with relative ease. Andrew Ng liked it enough that we had hot dogs for happy hour the next day 🤣🤣
Go check it out at https://see.hotdogs.live.

jin yang from silicon valley demoing hot dog app
