Hey, I’m Pranav Deepak. Blue Morphism is a blog/digital garden about mathematics and code — the things I enjoy most. I’m still learning (I can write some Python, C, a bit of CUDA C++, and PTX), so expect mistakes and rough edges. Sometimes I’ll post longer projects, sometimes small, self-contained notes. If you spot mistakes or just want to talk, you can email bluemorphism@gmail.com, join the Blue Morphism Discord, or reach out on Twitter.

Over the past few years I’ve been into math, deep learning, reinforcement learning, and more. Lately I’ve fallen in love with GPU kernels and found what feels like a “north star” problem:

Can mathematics give us a universal way to describe the minimal GPU instruction set needed for a given operation, in a form that still leaves a tractable search space for faster kernels?
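Stated loosely (this is just a sketch, and the notation is made up for illustration, not a settled definition):

$$
S^\star(f) \;=\; \arg\min_{S \subseteq I} |S| \quad \text{such that} \quad \exists\, p \in P(S) \text{ with } p \equiv f,
$$

where $I$ is the full instruction set, $P(S)$ is the set of kernels you can write using only instructions from $S$, and $f$ is the operation you care about. The hope is that searching for fast kernels inside $P(S^\star(f))$ is tractable in a way that searching all of $P(I)$ is not.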

PTX has hundreds of instructions, but a fast matmul really just uses a handful of load/store and tensor-core ops. Modeling that small subset seems far more approachable than modeling the entire ISA.
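To make that concrete, here's a minimal (and deliberately naive) warp-level matmul using CUDA's wmma API. The kernel name, layouts, and tile shape are my choices for illustration, not anything from a production kernel:

```cuda
// Minimal sketch, assuming sm_70+, M/N/K all multiples of 16, and a grid
// launched to cover exactly (M/16) x (N/16) tiles with one warp per tile.
// A is row-major, B is column-major, C is row-major.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tiny_wmma_matmul(const half *A, const half *B, float *C,
                                 int M, int N, int K) {
    // One warp per output tile: launch with blockDim.x a multiple of 32.
    int warp_m = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize; // tile row
    int warp_n = blockIdx.y * blockDim.y + threadIdx.y;              // tile col

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);

    // The whole hot loop is two fragment loads and one tensor-core mma.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warp_m * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + warp_n * 16 * K + k, K);
        wmma::mma_sync(acc, a_frag, b_frag, acc);
    }
    // ...followed by a single store of the accumulator tile.
    wmma::store_matrix_sync(C + warp_m * 16 * N + warp_n * 16, acc, N,
                            wmma::mem_row_major);
}
```

Dump the PTX for this (nvcc -ptx) and the inner loop is essentially wmma.load.\*, wmma.mma.sync, and a bit of address arithmetic. That tiny vocabulary, not all of PTX, is what would need modeling.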

I’ve also collected a lot of math and code during this time. Eventually I want to polish and publish it here, but don’t expect that too soon.

About Blue Morphism (and me)

Most of my work happens on paper or in private repos I use like notebooks. I regret not sharing more in public, and I want to change that. Over time I’ll clean up those repos and publish them. The chaos folder here is a messy dump of whatever escaped the notebooks and private code. My plan is to gradually work through it all, polish what’s useful, and put it out here.

For now, though, most of my focus is on the “north star” problem: Bitter Lesson for faster kernels? I gave a talk about it at the Hacktron AI x Secure Sips – Breaking & Building AI Meetup, met a bunch of absolutely cracked people there, and made some really good friends.

Trading coverage for maturity

The way I do math trades breadth for a bit more depth and maturity. I’m okay with that. The “aha” moments are what I enjoy.