A few weeks ago there was a thread about using AI to finish abandoned projects, and a comment from avereveard (https://news.ycombinator.com/item?id=47905088) about building their 100x100 grid battleship game sparked something in me. I didn’t build that, but it pushed me to finally finish a combat idea I’ve been wanting to play for ages.

Introducing Naval Strike!

It’s a simultaneous turn-based, fleet-vs-fleet game on a grid, played in the browser. No accounts or signups needed. Both players plan their moves, then the turn resolves all at once.

There are three play styles:

* Solo. Procedural maps with guided “AI” opponents (in the same way that 1990s games had “AI” opponents)

* Scenarios. Objective missions on procedural maps like "rescue the downed pilot" or "destroy the convoy"

* Campaigns. Historical battles on real-world maps (sink the Bismarck in the Atlantic, or escort the tankers through the Strait of Hormuz)
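To give a feel for what “1990s-style AI” means in the solo mode, here’s a purely illustrative sketch of scripted tactics as a small decision tree. The state fields, order names, and thresholds are all my inventions, not the game’s actual logic:

```typescript
// Illustrative scripted-tactics decision tree (hypothetical, not the game's code).
interface ShipState {
  hull: number;          // remaining hull, 0..1
  enemyInRange: boolean; // can we shoot something this turn?
  nearObjective: boolean;
}

type Order = "fire" | "close_distance" | "retreat" | "hold";

function decideOrder(s: ShipState): Order {
  if (s.hull < 0.3) return "retreat";            // badly damaged: disengage
  if (s.enemyInRange) return "fire";             // shoot if we can
  if (!s.nearObjective) return "close_distance"; // otherwise move toward objective
  return "hold";
}
```

A handful of situational rules like this, checked in priority order, gets you opponents that feel purposeful without any search or learning.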

A few things I figured HN would ask:

I designed the architecture and Claude did the implementation. While 90% of the decisions are mine, 90% of the lines are AI-written. It took lots of micromanaged short bursts to get things to look and feel right; I think I burned a week’s worth of tokens just getting the fog to work how I wanted. Eventually I learned that many small bursts of code, each followed by testing, got me where I wanted much faster than long sessions. I also built preview pages so I could test animations, sequencing, etc. in isolation without affecting the codebase or chewing through tokens.

Stack. TypeScript/Canvas 2D, hosted on Cloudflare. The server is a tiny WebSocket relay: it pairs two players by room code and blindly forwards messages, with no game logic on the server. There are no accounts and no tracking beyond default Cloudflare insights; just open the URL and play. The opponent AI is scripted tactics with situational decision trees.
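The relay idea is simple enough to sketch in a few lines. This is a minimal, framework-agnostic illustration of the pairing-and-forwarding concept (the class and method names are my own, not the actual server code): the server holds at most two peers per room code and relays each message to the other peer, never inspecting it.

```typescript
// Sketch of a room-code relay with no game logic (illustrative, not the real server).
interface Peer {
  send(msg: string): void; // e.g. a WebSocket connection
}

class Relay {
  private rooms = new Map<string, Peer[]>();

  // Returns true if the peer joined; false if the room already has two players.
  join(roomCode: string, peer: Peer): boolean {
    const peers = this.rooms.get(roomCode) ?? [];
    if (peers.length >= 2) return false;
    peers.push(peer);
    this.rooms.set(roomCode, peers);
    return true;
  }

  // Blindly forward a message from `sender` to the other peer in the room.
  forward(roomCode: string, sender: Peer, msg: string): void {
    for (const peer of this.rooms.get(roomCode) ?? []) {
      if (peer !== sender) peer.send(msg);
    }
  }
}
```

Because the relay never parses messages, all turn resolution happens client-side, which is what keeps the server tiny and stateless.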

Art is a mix of AI-generated and hand-made. I’m not artistic enough to do everything by hand, but there’s a lot of manual pixel-by-pixel editing on the assets. Assets are stored as JSON arrays and drawn directly to the screen with a colour palette.
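For anyone curious what a palette-indexed JSON asset looks like in practice, here’s a rough sketch of the idea. The exact format, palette, and function are my guesses for illustration, not the game’s actual asset pipeline:

```typescript
// Sketch: a sprite stored as a 2D array of palette indices, drawn to Canvas 2D.
// (Hypothetical format and palette, for illustration only.)
const palette = ["", "#1b2a41", "#9db4c0", "#d9d9d9"]; // index 0 = transparent

// A tiny 4x4 sprite: each number indexes into the palette.
const sprite: number[][] = [
  [0, 1, 1, 0],
  [1, 2, 2, 1],
  [1, 2, 3, 1],
  [0, 1, 1, 0],
];

function drawSprite(ctx: CanvasRenderingContext2D, x: number, y: number, scale = 4): void {
  sprite.forEach((row, ry) =>
    row.forEach((idx, rx) => {
      if (idx === 0) return; // skip transparent pixels
      ctx.fillStyle = palette[idx];
      ctx.fillRect(x + rx * scale, y + ry * scale, scale, scale);
    })
  );
}
```

Storing sprites this way makes pixel-by-pixel edits a matter of changing numbers in a JSON file, and palette swaps (e.g. team colours) come for free.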

I ended up building a small map editor that lets me "trace" Google Maps screenshots, so the campaign maps are geographically close to the real engagements. Sounds are from open-source libraries. You can mute them with the little speaker button, but they're on by default, which might upset a few people.

Although this was largely coded by AI, I got heaps of enjoyment from being able to focus on getting the UX, style, gameplay and UI just how I wanted. Being able to test (and throw away!) so many ideas so quickly was awesome fun.

Feedback welcome, especially on balance, campaign design and gameplay; it's hard to play-test every variable!