The recent hype wave the world of AI is riding is definitely interesting to watch and analyze. First of all, we call it AI, but today 90% of the conversation is really about a sub-sub-field of it: large language models. These surprisingly good technologies (GPT from OpenAI, PaLM from Google, Llama from Meta, and so on) are basically changing the world as we know it, so it's fair to be hyped about them. But there might be a subtle problem.

I didn't want to use this word, but when a technology this disruptive comes out, everything starts to revolve around it.

If you're even remotely active in some sort of online tech community, you've likely noticed how a good number of indie (or not-so-indie) projects are just about building something around LLMs.

While it's obvious that the applications are countless, with some potentially being even more disruptive (sorry...) than the original product, it's also noticeable how innovation is becoming a casualty in this scenario. If you're starting to build a tool today, it's pretty much impossible not to think about ways to integrate these models into it. This results in two things:

  1. a proliferation of unnecessary tools
  2. a deceleration in the development of useful stuff that doesn’t require LLMs

Be honest: how many GPT wrappers have you seen so far?

As interesting as some third-party applications are and will be, the main risk remains that OpenAI and the other companies behind these models and products are: a) already shipping crazy features and additions by the hour, and b) probably working on even better ones as we speak.

So, is there a solution? Should we just stop thinking about them? Of course not. But there are more interesting ways of integrating them into our services: take a look at the Copilot extension for Visual Studio Code, and you can easily appreciate the modularity with which it's injected into the editor: as an extension, not as the center of everything. A lot of popular tools are already working on extremely focused, specific integrations (I also like Supabase's AI SQL editor, for example), and that's amazing for several reasons.

First, these integrations can be fine-tuned on domain-specific knowledge, potentially providing more accurate value than generalist assistants. Second, the essence of the original product doesn't get lost in the shadow of a big new AI feature that ends up becoming the product itself.

There are still apparently less cool areas with lots of gaps to fill, and spending all our energy on large language models might be a huge mistake.

For example, I'm building a project to organize personal health record data. While it might seem tempting to add LLM features, and doing so would probably bring some benefits, that's not my main focus right now, because it wouldn't solve the main problem I'm trying to tackle: a smoother personal health data management experience.

Now think: how many skilled people are channeling their energy into messing around with the OpenAI API instead of solving problems that would probably require a completely different family of solutions?

Of course, everyone decides what to work on, but that’s still something I would think about.

Another aspect to analyze is the potentially negative effect that the widespread use of LLMs can have on socially relevant problems in the AI world, like fairness. Researchers know this: it's difficult to make a model both fair and accurate. Think about which of these two objectives this hype wave will naturally push for... Well, yes, people and companies will want their models to be as accurate as possible, producing the best obtainable output, even at the cost of fairness.

But hey, fairness research is also shifting a lot toward the LLM direction. A lot of the AI research is shifting in the LLM direction. Actually, a lot of research in general is shifting toward the LLM direction.

Maybe too much research is focusing on large language models? In my opinion, there's a serious and tangible risk that a lot of money that could have gone elsewhere is going toward studying, experimenting with, and creating better, faster, larger language models.

It's also true that this is the prime period for this technology, and eventually the trend will shift toward something else. But in the meantime, several problems that are not only important but also urgent might go unsolved.

More things to think about

Company trust comes and goes

Vercel, the frontend deployment platform that makes shipping apps easier, just went through a pretty bad moment. An X user who was quietly working on a personal project was approached by a Vercel employee, who threatened to take the app down because it was too similar to his own.

How could this person know this? Well, it turns out he had accessed Vercel's customer databases for personal benefit. And yes, he got fired, but the company also took a major hit: many users suddenly lost trust and are already moving their projects elsewhere.

Lesson: no matter the hype, or how good the services you create and provide are, trust shouldn't be broken.

Challenge the status quo?

DHH, the creator of Ruby on Rails, Basecamp, and HEY, just shook things up with his Rails World keynote, polarizing the tech world by introducing No Build: the idea that you should ditch the JS-driven build processes modern web development relies on.

While this kind of take can sound sweeping, I think the theatrical tone is necessary to make some noise and challenge the status quo, which is almost always a good thing.

Why making software is cool

I don't watch YouTube as much as I did until a few months ago, but Marko's videos are always relaxing and inspiring to watch.

If you want to know why software engineers love being software engineers, watch this.

(As one, I can 100% confirm it, and you can probably guess that by reading this piece.)

Inspiration for the newsletter

This is a pivotal moment for this newsletter: yes, it already existed, but I've given it a new name, the one I came up with a while ago for my "startup" (more like a constantly changing side project), and I'm committing to writing regularly.

This comes from several realizations/thoughts:

  • I’ve always loved writing
  • these pieces can be used for videos too
  • Substack rocks
  • Substack publications rock (more on this below)
  • depth is needed
  • I want to connect with people who appreciate depth

Here are a few newsletters that hyped me up to start writing on Kleia:

  • Not Boring by Packy McCormick
  • Noahpinion by Noah Smith
  • Lenny’s Newsletter by Lenny Rachitsky
  • The Generalist by Mario Gabriele
  • Elena’s Growth Scoop by Elena Verna
  • The Pragmatic Engineer by Gergely Orosz
  • ByteByteGo Newsletter by Alex Xu
  • The Developing Dev by Ryan Peterman
  • Boundless by Paul Millerd

…and a lot more.

As Lenny says in his piece on getting 500,000 subscribers, it's easy to get lost in low-impact details such as design and promotion strategy, but the most important thing for getting traction with a newsletter is writing good stuff.

That's what I'm aiming for. And don't let the prevalence of software-related thoughts in this issue fool you: Kleia won't be a software-only newsletter. Most pieces will be driven by a passion for innovation and an insatiable curiosity, and we have so much to talk about, across several different subjects and topics. I can't wait.