Introduction

Programming feels fun again. The reason is Emacs, which I started using last month.

I’ve often tried a new setup, only to return to IntelliJ after a week or two. This time, though, I’ve stuck with Emacs for over a month, and it feels like a surprisingly natural fit.

I’m using Doom Emacs. Right after installation it already feels like a finely crafted environment.

How programming has changed

With advances in AI, programming has been shifting from “writing code” to “giving instructions in natural language”. Because of that, a plain text editor can sometimes feel more compatible than an IDE specialized for one language.

It used to be “programming = searching”: look things up in the browser, write code in the IDE. Editor completion reduced the need to search and was very convenient.

Now AI answers questions and searches for us. I open the browser far less, and my flow is less interrupted. Since CLI‑friendly coding agents became available, a terminal‑centric workflow has become very comfortable. It’s easy to spin up small tools, and even complex commands can be assembled with help from AI.

Seen this way, editors that live in the terminal—Vim and Emacs—actually fit the AI era quite well.

How I use it day to day

Lately I work with Tmux + Emacs. I start a Tmux session per project and run terminal Emacs inside it.

I tried the purist approach of “do everything inside Emacs,” but that was a bit much for me. I use Magit for Git, and for most other tasks I just use regular CLI tools—that balance feels right.

For notes and Org‑mode I use GUI Emacs. In the GUI I can view images and keep notes open in a separate window while coding, which suits organizing and recording.

Why Emacs over Vim

Doom Emacs is built around Evil mode, so it feels almost like Vim. So why not Vim? For me the reason is Org‑mode.

I can keep notes, code, and TODOs all in a single file. It works well as an outliner. After trying countless memo apps and outliners, this was the deciding factor for me.

This post is also written in Org‑mode and published using uniorg.

On top of that, the keybindings and overall feel of Doom are exquisite—I can’t help but feel respect for the effort that went into it.

And Emacs Lisp is fun. In the past, even after reading an intro book I didn’t feel I could use it productively. With AI today, I can customize by writing Emacs Lisp from day one, learning from the AI’s output as study material.

Remembering the joy of programming

I switch between editor and terminal with Tmux while developing. The feel of those operations has a kind of “game‑like” fun.

Thinking back, when I was first absorbed in programming, I used Emacs. Coming back to it after all these years feels like a curious twist of fate.

Modern environments are convenient, but in some ways they can feel constraining. With AI on top, I sometimes wondered, “Where did the programming we loved go?”

Emacs, by contrast, leaves room to customize and tinker—space for creativity. Even as AI keeps taking over tasks, finding the “joy of ingenuity” in Emacs gives me hope.

During development, you may want to run a local server on your Mac and verify behavior from a mobile device.

If you start the server bound to 0.0.0.0, you can reach it from the same LAN using the Mac’s private IP address (e.g., one in the 192.168.0.0/16 range). However, depending on security settings, only localhost may be allowed and access via the private IP may be blocked.
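As a concrete illustration, using Python’s built‑in HTTP server as a stand‑in for whatever dev server you actually run (the port number here is arbitrary):

```shell
# Listens on all interfaces: reachable from other LAN devices
# at http://<your-mac-private-ip>:3000/
python3 -m http.server 3000 --bind 0.0.0.0

# Listens on loopback only: reachable solely from the Mac itself
python3 -m http.server 3000 --bind 127.0.0.1
```

Most dev servers (Next.js, Rails, etc.) expose an equivalent host/bind option.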

In that case, establish SSH port forwarding between the iOS device and your Mac so that the iOS device can access the Mac’s local environment using the localhost address directly.

Connection steps

1. Enable “Remote Login” in macOS Sharing

Open System Settings, go to Sharing, and turn on “Remote Login”. This allows connecting to the Mac over SSH.

macOS Sharing settings

2. Install WebSSH (SSH client) on iOS

Install WebSSH from the App Store. Its VPN Over SSH feature keeps the SSH tunnel available system‑wide, so other apps such as Chrome and Safari can use the port‑forwarding setup too.

Install WebSSH

3. Configure port forwarding

Launch WebSSH and configure as follows.

  1. Enter the information confirmed in macOS Sharing:
    • Hostname
    • Username
    • Password
    • Private IP address
  2. Port Forwarding settings
    • Example: if your Mac uses localhost:3000, enter 3000:localhost:3000.
  3. Enable VPN-Over-SSH
    • This keeps the SSH tunnel active while using other apps.
WebSSH port‑forwarding settings
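For reference, WebSSH’s 3000:localhost:3000 notation matches OpenSSH’s -L local‑forwarding syntax. From any machine with a regular ssh client, the equivalent tunnel would be (the username and IP address below are placeholders):

```shell
# Forward port 3000 on this device to localhost:3000 on the Mac;
# -N keeps the tunnel open without starting a remote shell
ssh -N -L 3000:localhost:3000 user@192.168.1.20
```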

4. Verify port forwarding

Once configured, connect from WebSSH on the iOS device and confirm the tunnel works: for example, open http://localhost:3000 in Safari on the device and check that the Mac’s local server responds.

Verify the configuration

Lately I’ve been improving my personal site a little every day, and it’s getting better. Here are some thoughts on having a personal site.

Why a personal site?

You fully control the content and design.

I also post on note (a Japanese blogging platform), but its narrow text column makes code hard to read.

For tech posts I could use Qiita or Zenn and probably get more views, but those aren’t great for non‑tech posts.

I want a place that holds both tech and non‑tech posts—a site that conveys who I am. A personal site fits that.

How to build one

There are many options: no‑code tools like WordPress, Wix, and STUDIO are convenient for many people.

But hand‑crafting your site is also a good option. With a static site generator it’s quite simple.

Static site generators

Popular ones include Astro, Hugo, and Jekyll.

This site uses Next.js’s static generation.

Templates can get you a decent design quickly, and if you want to customize, you can change anything at the source level.

Static sites are also cost‑effective. This one is on Cloudflare Pages, and aside from the domain (~1,000 JPY/year), it costs essentially nothing to run.

Of course the barrier to entry is higher, and features you’d get via a WordPress plugin may require custom development.

But if you enjoy the act of building, I highly recommend it.

There’s a saying, “make haste slowly” (festina lente), favored by Augustus more than 2,000 years ago.

The fact that it has survived to the present suggests that people in every era tend to fail by rushing.

Japanese proverbs like “haste makes waste” carry a similar message, but “make haste slowly” adds that haste itself is also important—not just going slow. That nuance is why I like it.

In development it may be useful, too. Go too fast or too slow and you fail.

“Slow” in programming might mean:

  • Writing readable code
  • Writing unit tests
  • Refactoring
  • Maintaining documentation
  • Introducing CI/CD

“Haste” might mean:

  • Releasing early
  • Getting user feedback early
  • Implementing only what’s necessary
  • Avoiding unnecessary abstraction

Both code quality and release speed matter—that’s “make haste slowly.”

Without some pressure I tend to lean “slow,” but under time pressure I sometimes go too fast and ignore quality.

When short on time, go “slow.” When time allows, “make haste.” I want to keep this balance in mind.

I’ve long felt that one ingredient for making something good is a mindset that truly accepts the significant time it takes.

Knowledge and skill matter, of course. But whether you genuinely accept the long hours may be a bigger factor than we realize.

Years ago I watched a video that shocked me: the creator draws mazes for half a year and then burns them at the end. It’s a battle of endurance, and I came to think that such endurance is essential for making things.

In the end, making meaningful things often comes down to patience and stamina.

I decided to go freelance even though I didn’t have concrete projects lined up at resignation time. I wasn’t sure it would work out, but I quit thinking, “It’ll work out somehow.”

After that, people I met at conferences and former colleagues introduced me to projects, and I started freelancing smoothly.

Most of my work has come through acquaintances or people I met through work. To freelance, it seems important to first build relationships by spending time inside a company.

• • •

As a freelancer I took a “I’ll do anything” stance and got to touch many kinds of projects, for example:

  • Mobile app development with React Native
  • Mobile app development with Flutter
  • Rails backend development
  • API development in Python
  • Backend APIs in Go
  • Website development with Next.js
  • AWS infrastructure with Terraform
  • Machine learning / AI features

Early on I also did advisory/assessment work, but I find writing code myself more fun and impactful, so I’ve shifted away from purely advisory roles.

Unlike being an employee, nearly all of my working hours as a freelancer are programming, which suits me well.

• • •

While freelancing has gone better than expected, my original goal—personal projects—has often been pushed aside.

I planned to work three days a week on client work and spend four days on my own development, but I sometimes found myself effectively working seven days a week. That’s my big reflection over the past two years.

First, when schedules get tight, I tend to increase my hours out of a sense of service. Second, personal products may or may not make money, and even if they do, that’s in the future—so paid work naturally takes priority.

• • •

My software side projects often stalled and I lacked strong ideas, so recently I’ve focused more on hardware, aiming to build a mini‑robot.

I’ve also been involved in the MIA project since last year, working on a hardware product there too.

Going forward I plan to work as a software engineer who can also do hardware development.

I used to enjoy programming for its own sake and wasn’t very interested in building actual apps or services. That changed after I joined an internal hackathon.

At Yahoo there was an annual Internal Hack Day—an in‑house hackathon where employees formed teams, built whatever they wanted over two days, and presented. I joined every year until I left. I even joked in team meetings that the most important part of my job at Yahoo was “participating in hackathons.”

For me, programming had been either a solo activity or a large‑team endeavor. Hackathons are different: you form a small team and build your own idea, which was a new kind of fun.

On my third try I had great teammates and we built a similar‑image search service called “I want to go to the Uyuni Salt Flats,” which won a prize. It suggested nearby sightseeing spots that looked like famous overseas locations. The system performed better than expected, and the effort was later featured on Yahoo’s tech blog.

After the hackathon we wanted to keep building together, so we applied to SoftBank Group’s in‑house new‑business program “SoftBank InnoVenture”, were seconded there, and took on a new product challenge.

• • •

At InnoVenture we built an exercise service called “BAKOON!”.

BAKOON!

It was a live‑streaming app for exercising with idols, with motion‑capture for visualizing activity and stamps for communication.

BAKOON! used React Native and Firebase; our technical approach is covered in a book we published.

Before this I hadn’t built a serious mobile app, nor much web frontend. I learned the basics as I went and somehow shipped.

We discussed product direction including engineering, and also designed the KPIs on the marketing and business side. It wasn’t just programming—I learned to think about the entire product. Marketing and business were more data‑analytical than I expected, and I felt I could leverage my math background.

We operated for about a year after launch and worked hard, but ultimately shut the service down. Still, the 0→1 experience changed what I wanted from programming. Rather than programming itself, I enjoyed deciding on an idea and building it into a product.

That gave me a new goal: build my own products. To secure development time, I decided to go independent as a freelancer.

When I joined Yahoo as a new grad, there was a large data‑science division, so I assumed I’d be assigned there based on my interviews. Instead I landed on the search team for Yahoo Shopping. It wasn’t my first choice, but in hindsight working on a real service was valuable.

My main work was adding features to the search API and operating the search engine. The codebase was large and mature, so I started by reading and understanding it.

Some software‑design ideas I’d read about hadn’t clicked before, but looking at a real, complex system made them concrete.

I was also impressed by the performance of in‑house tech compared to popular OSS. For example, the now‑open‑sourced full‑text engine Vespa and the NoSQL store mdbm were very fast. I’d assumed the most popular OSS was best, but the internal tools were often superior.

Teams owned APIs and subsystems, so feature work required cross‑team coordination. These days I usually work in 2–3 person teams with little coordination overhead, but back then I often visited other teams when launching a new feature or sunsetting an API. That kind of organizational collaboration is an important part of being a professional developer.

• • •

Doing both development and operations was another key experience. If you split them, priorities can get tossed back and forth; owning both helps balance trade‑offs.

One memorable recurring event was preparing for the annual sale day: forecasting traffic, adjusting server counts, and running load tests—things you only do at large scale. I don’t do that much now on smaller services, but I suspect it’ll come in handy again someday.

• • •

After joining the company I switched from Emacs/Vim to IntelliJ IDEA. At work I needed to handle many languages—PHP, Perl, Java, Go, Python—so I preferred an IDE with broad support and minimal setup.

Outside work I was into high‑performance systems, learning C++ and efficient programming. I even wrote toy languages and interpreters on the side, hoping to use C++ in a future project. I didn’t use C++ at work then, but now I’m using it heavily for firmware, and that learning has paid off.

I first tried programming in high school around 2008. I had my own desktop computer and installed Ubuntu to tinker with it. Controlling the machine from the shell felt fresh and fun.

Working via the shell makes automation easy, and I thought mastering Linux commands would let me truly use a computer well. My “programming” began with simple shell scripts strung together.

Later I wanted to learn real programming beyond shell scripts. The internet said “learn C first,” so I bought a beginner‑friendly C book titled “Shirouto Kuma‑kun’s C Class” (Japanese).

I “felt like I got it,” but looking back I probably would’ve struggled even with FizzBuzz.

Next I picked up a Windows game‑programming book.

I wrote a simple side‑scrolling action game in C, mostly by copying from the book. It ran, but I can’t say I truly understood it.

Heading into university, I was torn between studying information science or electrical/electronic engineering. I ended up at the University of Electro‑Communications in the Information and Communication Engineering course, where I could learn both.

• • •

In my first year there was a C programming class. I had forgotten much of what I learned in high school, so I started over. There were students who finished assignments incredibly fast—it was eye‑opening. Also, Emacs was required for programming in class, and I became a keen Emacs user.

I wanted to get better and started learning Python. It’s the most popular language now, but back then it wasn’t common in Japan. That rarity actually attracted me.

I remember reading “Minna no Python” as my first Python book and finally being able to write something practical.

During undergrad I didn’t work part‑time or hang out much; I mostly stayed home coding. Living alone, I sometimes wouldn’t see anyone for days and wondered if I was okay with my future.

After getting comfortable with Python, I wanted low‑level knowledge and read books in that area. One that stuck with me was “Hacking: The Art of Exploitation.” Although it’s mainly about security, it finally made C click for me.

I also studied assembly language.

I read many other books, but I wasn’t interested in application development at the time, so not many were practical.

I also didn’t plan to get a programming job—coding was just a hobby—so practicality wasn’t top of mind.

• • •

In my fourth year I joined a Brain‑Computer Interface (BCI) lab and mainly programmed with MATLAB and Octave. For experiments using visually evoked potentials, I wrote programs to present visual stimuli on a PC with PsychoToolbox.

BCI

Python was my strongest language then, so I pushed to build experimental apps in Python. I controlled the EEG device’s DLL via ctypes, and a mistake could blue‑screen the PC—rough times.

Eventually the experiment app settled on MATLAB, and I used Python for analysis and visualization.

Around then, machine learning was gaining attention. I learned with scikit‑learn and, in a seminar, worked through Bishop’s yellow book “Pattern Recognition and Machine Learning.”

Later I studied data analysis with pandas, which I still use now and then.

• • •

In my first year of the master’s program, I started programming as a job through part‑time work and internships—improving search and building ranking models. Large language models didn’t exist yet, so it was fun trying various feature‑engineering ideas to move metrics.

Doing near‑production work made me realize I’d accumulated a decent skill set.

My research was going okay too, and I enjoyed the deep‑dive nature of research, so I considered a PhD. But internships were very close to real‑world development, and I felt I’d enjoy it as a job.

And that’s how I ended up becoming a professional programmer.

Recently I performed a large refactor in a Go project and wrote down the approach.

Originally, the project used a flat layout like this:

- cmd/api/main.go
- user.go
- user_db.go
- user_db_test.go
- user_handler.go
- user_handler_test.go
- server.go

This worked for a while, but after about a year the number of files grew and overall readability deteriorated.

Keeping everything in one package avoids cyclic imports, but we started to hit symbol name collisions that made certain names unusable.

After considering options, we adopted the layout that places the domain package at the repository root.

What this layout means

By “placing the domain package at the repository root,” I mean creating a domain package at the repo root and organizing implementations into subpackages according to their dependencies. The idea is inspired by Ben Johnson’s article “Standard Package Layout”.

Although the article calls it a “Standard Package Layout,” that name can invite bikeshedding. In this post I’ll simply call it the “root‑domain package layout.”

For example, for a backend API using MySQL, the root domain package might look like this:

- cmd/api/main.go
- mysql/
   - user.go
- http/
   - user_handler.go
   - server.go
- user.go

You don’t have to place Go files directly at the Git repository root. If you prefer not to, or if the repository name doesn’t match the domain name, create a directory for the root domain and put the code there.

If the product is named myapp, the domain package becomes myapp.

- cmd/api/main.go
- myapp/
  - mysql/
    - user.go
  - http/
    - user_handler.go
    - server.go
  - user.go

myapp/user.go defines models used across the codebase:

package myapp

type User struct {
    Name string
    Age  int
}

type UserService interface {
    CreateUser(uid string) (*User, error)
    GetUser(uid string) (*User, error)
    UpdateUser(uid string) (*User, error)
    DeleteUser(uid string) error
}

In the mysql package, implement the myapp.UserService interface.

package mysql

type UserService struct {
    db *sqlx.DB
}

func (s *UserService) CreateUser(uid string) (*myapp.User, error) {
	// Concrete implementation using MySQL
}

In the http package, depend on the interface from myapp instead of using mysql.UserService directly.

package http

type UserHandler struct {
    userService myapp.UserService
}
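To see how the pieces connect, here is a minimal, self‑contained sketch of the dependency‑injection idea. Types are inlined into one file for brevity, and an in‑memory stub stands in for the real MySQL‑backed implementation; in the actual layout these live in the myapp, mysql, and http packages.

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for myapp.User and (a trimmed-down) myapp.UserService.
type User struct {
	Name string
	Age  int
}

type UserService interface {
	GetUser(uid string) (*User, error)
}

// Plays the role of mysql.UserService; an in-memory map replaces the DB.
type memoryUserService struct {
	users map[string]*User
}

func (s *memoryUserService) GetUser(uid string) (*User, error) {
	u, ok := s.users[uid]
	if !ok {
		return nil, errors.New("user not found")
	}
	return u, nil
}

// Plays the role of http.UserHandler: it depends only on the interface,
// so any UserService implementation (MySQL, gomock mock, in-memory stub)
// can be injected without the handler knowing about the storage layer.
type UserHandler struct {
	userService UserService
}

func main() {
	svc := &memoryUserService{users: map[string]*User{
		"u1": {Name: "alice", Age: 30},
	}}
	h := UserHandler{userService: svc}

	u, err := h.userService.GetUser("u1")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name) // alice
}
```

Swapping the stub for the real mysql.UserService happens only in cmd/api/main.go; the handler code does not change.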

For testing, I generated mocks of the root package via gomock under the mocks directory and had subpackages depend on those mocks when they needed services from the root package.

package myapp

//go:generate mockgen -destination=./mocks/user_service_mock.go -package=mocks . UserService
type UserService interface {
    CreateUser(uid string) (*User, error)
    GetUser(uid string) (*User, error)
    UpdateUser(uid string) (*User, error)
    DeleteUser(uid string) error
}

This way, the mysql package focuses solely on MySQL‑related logic, while the http package avoids a hard dependency on MySQL and can write tests focused on the HTTP server behavior.

- cmd/api/main.go
- myapp/
  - mocks/
    - user_service_mock.go
  - mysql/
    - user.go
    - user_test.go
  - http/
    - user_handler.go
    - user_handler_test.go
  - user.go
  - user_test.go

In the actual project, the structure looked like this:

- cmd/api/main.go
- myapp/
  - http (HTTP server)
  - grpc (gRPC server)
  - rdb (MySQL)
  - s3 (Amazon S3)
  - dydb (DynamoDB)
  - iot (AWS IoT)
  - ssm (AWS SSM)
  - fcm (Firebase Cloud Messaging)
  - ... (omitted)

In these subpackages we run tests as close to reality as possible using dockertest and LocalStack when applicable.

Why not name the package domain

It’s perfectly fine to create a domain package instead of placing the domain model at the root:

- cmd/main.go
- domain/
  - user.go
  - user_test.go
- http/
  - user_handler.go
  - user_handler_test.go
- mysql/
  - user.go
  - user_test.go

In that case, code would refer to types as domain.XXX:


func main() {
    user := domain.User{}
}

If placed at the root as myapp, it becomes myapp.XXX:


func main() {
    user := myapp.User{}
}

Using the product name as the package makes it more obvious that these types are used across the whole product.

Comparison with other layouts

Some projects reference specific architectures like Clean Architecture and introduce directories named after layers:

- cmd/main.go
- domain/
- infrastructure/
- presenter/
- repository/
- usecase/

Such naming assumes familiarity with the architectural vocabulary and can be opaque to newcomers.

Similar to why I prefer a product name over domain, I find more direct naming makes code easier to navigate.

Grouping subpackages by dependency keeps placement obvious even without special background knowledge:

- cmd/main.go
- mysql/
- http/
- user.go

Takeaways

Refactoring to place the domain package at the repository root made the project easier to understand and to test. It encourages a design that’s approachable without prior context.
