
This is a Friday long-read, so grab a warm cup of something and kick back, because we’re going to take our time on this. The world is about to profoundly change. I know you’re nervous – perhaps excited and optimistic too – but if you’ve been paying attention to the trajectory of this thing, the rational reaction is to be nervous. In this post I’d like to unpack, in practical and tangible terms, what AI is, where it came from, and the state of play, and then I’ll show you a path that will give you a pretty good shot at surviving the coming revolution.

Who am I? I’m Mark Maunder. I founded Wordfence in 2011 and wrote the early versions of the product until 2015, when Matt Barry took over as lead developer and I morphed into a tech executive running Defiant Inc, which makes Wordfence. We have over 4 million customers using our free product and a large number of paying customers using our various paid WordPress security products. I’ve been a technologist since the early 90s and was a kid hacker in my teens in the 80s and 90s. I started my career in mission-critical operations for companies like Coca-Cola, Credit Suisse (now UBS), and De Beers, then went over to dev for companies like the BBC and eToys, which was one of the biggest dot-com busts. Around 2001 I created WorkZoo in the UK, the first job meta-search engine, which competed with Indeed (launched after us) and made Time Magazine and The New York Times before I sold it in 2005. I subsequently launched a geoblogging platform, inline comments via JavaScript, an ad network, real-time analytics, a localized news website, and more. You could say I’ve been in the innovation game for a minute, and I’m in it for life. I’m based in the USA these days in case you’re curious – I moved over here in 2003.

Examining Bubbles

My apologies for the long bio, but what I’d like to illustrate is that I’ve seen tech come and go. The hype cycles I’ve seen typically include:

Outlandish claims about how the tech will solve everything from slicing bread to world peace and everything in between.
Commercial vendors jumping on the emerging mega-trend to surf the wave with proprietary technologies of their own which they position as standards, or at the very least the default choice.
Nascent implementations of the tech that are immature, unstable, rapidly changing, and may very well be abandoned in a few months or a few years.
The press contracting a bad case of rabies and foaming at the mouth about the tech, amplifying the most extreme aspects and use cases and creating a lot of noise that makes it hard for implementers to sift out the truth and the fundamentals of the technology.
The investment community pouring cash into the space with little focus. This creates an extremely adversarial environment for tech practitioners who are building fundamental value, who now have to compete with powerful VC-backed marketing machines.
As Warren Buffett says, the Innovators, the Imitators, and the Swarming Incompetents enter the space in that order. I’d add that they form a pyramid, with each successive wave at least an order of magnitude bigger than the last. Things get crowded for a while.
Then you have the typical bust cycle, which cleans house and makes the tech uncool again, but also makes it interesting to the true believers. The VCs go away and stop making noise that innovators have to compete against. The imitators and swarming incompetents drift off to imitate and mess up something else. The businesses not creating fundamental value fail. Some creating fundamental value fail too, but the talent and tech are sometimes reincarnated into something else useful.

So who prospers, and what tech survives after a bust? Sometimes none of it, but helpful derivative technologies are created, like Java Applets in the 90s that inspired Flash which inspired standards-based rich content web browsers.

Sometimes out of the ashes, an Amazon is born, as with the dot-com boom. And sometimes you have incredible innovation where the innovators never see large-scale commercial success, but others do – as with Igor Sysoev, who created Nginx, which eliminated the need for a data center full of web servers to handle large-scale websites. Igor has a commercial offering, but the real winners were companies like Cloudflare, who based their global infrastructure on Nginx, reverse proxying massive numbers of connections to origin servers with rules about what gets proxied. Hey, I benefited too: Nginx saved our behinds when Kerry and I were running Feedjit.com from 2007 to 2011 because it let us handle over 1 billion application requests per month on just 6 servers. Thanks, Igor!

Blockchain technology is in a bust cycle, and you can map the characteristics I defined above onto blockchain. It looks eerily similar to the dot-com bust, and you’ll see an Amazon emerge from the ashes about a decade from now. It might be Coinbase, if they survive their more-than-80% dive in market cap, which may continue – but who knows?

Derivative Versus Fundamental Technology Innovation

I’ve mentioned a few tech bubbles so far, and that’s really the first step in pulling us out of the trees so that we can examine the various forests out there. Now let’s go a little more meta and talk about which forests matter. Some technology is derivative. An example of derivative tech is new stuff that runs inside a browser – WebSockets, for instance.

WebSockets are awesome because they let a browser keep a connection open and receive push notifications without doing the old TCP three-way handshake to establish a new connection every time the browser wants to check whether there’s data waiting on a chat server or whatever. We used to call this long polling, and I wrote a web server to do long polling – a clumsy but necessary approach – so when WebSockets came along we all breathed a sigh of relief and I happily retired my web server, glad that no one would see my nasty source code, which worked quite well, mind you.
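
To make that concrete, here’s a minimal sketch of the WebSockets model in Python. It assumes the third-party websockets package and a hypothetical chat server URL; the point is that one connection stays open and the server pushes messages down it, with no per-check handshake:

```python
# Minimal WebSocket client: one persistent connection, server-push
# messages, no repeated handshakes the way long polling required.
# Assumes: pip install websockets, and a hypothetical server URL.
import asyncio
import websockets

async def listen():
    async with websockets.connect("ws://chat.example.com/feed") as ws:
        async for message in ws:  # waits until the server pushes data
            print("received:", message)

asyncio.run(listen())
```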

Another derivative technology – and you’re not going to like this – is blockchain. It’s a useful and novel implementation of hashing algorithms and a few other cryptographic tricks, but honestly, we should have disintermediated banking at least two decades ago. The fact that blockchain has still failed to do that is both disappointing and illustrative: it is a set of derivative applications built on a tower of fundamentals, and it has a way to go before it matures. The hype cycle and speculative bubble around it were simply humans making human noise.

So that’s derivative tech. In addition to derivative tech, you have what I’d like to call fundamental tech. Electricity is fundamental. It’s cornerstone technology that transformed our ability to use and transport energy, enabling an industrial and technological revolution the likes of which the world has never seen. The microprocessor is also fundamental tech, for similar reasons. And there are algorithms that are fundamental tech, like the RSA algorithm, which allows us to establish a secure communication channel while a bad actor is listening in the whole time – the kind of tech that could have changed the outcome of World War II.
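
As an aside, the core trick of RSA is small enough to show. Here’s a toy sketch in Python with tiny primes – purely illustrative, since real RSA uses enormous primes plus padding schemes, so never use anything like this for actual security:

```python
# Toy RSA: encrypt with the public key (e, n), decrypt with the
# private exponent d. Tiny primes for illustration only.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # anyone listening can see this
recovered = pow(ciphertext, d, n)  # only the key holder can do this
assert recovered == message
```

The eavesdropper sees e, n, and the ciphertext, but recovering d requires factoring n – trivial here, infeasible at real key sizes.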

The Internet is fundamental tech. It connected the world and gave us the ability to build applications on top like HTTP and the web which are derivative tech.

Oh, I know you want to have a bar fight with me at this point, and we’ll do that if you’re attending WordCamp EU – a collegial and metaphorical bar fight, that is – but hopefully you’re picking up what I’m putting down here in a general sense: there is fundamental tech that profoundly enables and changes the world, and on which many other things are built, and there is derivative tech that gets a lot of attention but isn’t quite as transformative in a historical sense if you’re thinking in terms of centuries. And there’s the big fat grey area in between.

AI is fundamental tech. For decades we have been programming by writing functions by hand. We’ve gotten quite good at structuring our code using metaphors like object-oriented programming to create logical structures that make sense in a human world and help us organize large code bases. But fundamentally, the way we define logic in a program hasn’t changed for a long time. Until now. For the first time in all of history, we can create functions by training them rather than writing them by hand. In other, slightly more technical words, we can infer a function from observations and then use it – like babies and toddlers do. This is historic, it’s transformative, and it is a fundamental breakthrough.
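
To make that tangible, here’s a minimal sketch in plain Python of inferring a function from observations rather than hand-coding it. The data is made up for illustration; note that we never write the rule y = 2x anywhere – gradient descent discovers it:

```python
# A function we train instead of write: infer f(x) ≈ w * x from
# observed (x, y) pairs using gradient descent on squared error.
data = [(1, 2), (2, 4), (3, 6)]  # observations of an unknown function

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate

for _ in range(200):
    # Average gradient of the squared error over the observations.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w to reduce the error

print(w)       # ≈ 2.0 – the function was learned, not written
print(w * 10)  # ≈ 20.0 – using the inferred function
```

Every modern neural network, GPT-4 included, is this same idea scaled up: billions of weights instead of one, all adjusted to fit observations.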

The funny thing is that until quite recently – around 2015 – AI had suffered through many so-called “AI winters”: periods of significant interest in the field that catalyzed investment dollars, followed by a setback, usually caused by a reality check, that brought a winter in funding and interest. Does anyone remember the “expert systems” of the 80s? By the early 2000s, AI’s name had been dragged through the mud so many times that anyone doing serious research in the field used different words to describe their work, like “machine learning” or “informatics” or “knowledge systems”.

A few hardcore true believers like Yann LeCun, Yoshua Bengio, and Geoffrey Hinton powered through like Frodo and Sam across the Dead Marshes and went on to win the Turing Award, which is basically the Nobel Prize of computer science, and whose ceremony I had the privilege of attending when Rivest, Shamir, and Adleman won theirs for public key crypto. The Turing Award is a very big deal, and well deserved considering how adversarial the AI environment was for a while.

So what changed? Well, for one thing, you’re reading this post because it’s about AI and you’re interested. And you’re interested because you recently used GPT-4, MidJourney, DALL-E, or another model to create something. You’re seeing tangible results. And the reason you’re seeing results is that GPU hardware, algorithms, and interest in the field have brought us to an inflection point where the technology is delivering results that are jaw-dropping enough to catalyze more funding, more research, and more jaw-dropping results. This cycle really picked up steam in 2015 and, with the recent release of GPT-4, has entered a phase of what I would describe as true and consistent exponential growth.

According to NVIDIA, “LLM sizes have been increasing 10X every year for the last few years.” In two years that’s 100X; in three years, 1,000X; and so on. Extrapolate that out and be afraid. Or optimistic, if your mind isn’t for rent and you are hopeful yet discontent. Rush lyrics aside, that pace should give you an idea of how quickly this thing is coming. And now that we’ve reached the point of inflection I mentioned, where the hardware and algorithms seem to have overcome the cycle of disappointment that AI was stuck in for decades, I predict you’ll see consistent and exponential growth in the field’s capabilities for the foreseeable future, with a financial bubble and bust in there that won’t be of much consequence to the fundamental value of the technology.

“Thanks for the history lesson Maunder, but you brought us here with promises of telling us what to do about AI. So?”

What to do about AI

So far we’ve discussed what boom cycles look like and the kind of noise and bear traps you should be aware of. We’ve defined what AI is in fundamental terms – a function that you can train rather than hand code. And we’ve hopefully agreed that we’ve entered a period of consistent and exponential growth in the field. Now we’ll chat about how to survive and prosper in a world that looks a lot like when electricity was invented and commercialized, or the microprocessor, or the Internet.

Goldman estimates that AI will add 7% to global GDP, at a rate of about 1.5% growth per year. They also estimate that roughly two-thirds of US occupations are exposed to some degree of automation by AI, and you can extrapolate this globally. That kind of global disruption is matched only by the industrial revolution, or the entire recent tech revolution as a whole starting from 1980. From the same publication: “A recent study by economist David Autor cited in the report found that 60% of today’s workers are employed in occupations that didn’t exist in 1940.” So on an optimistic note, this kind of disruption isn’t a new thing, and we’ve been disrupting and adapting for some time now.

Perhaps you’re reading this because you run a WordPress website, perhaps secured by my product, Wordfence. That means you’re a creator of some kind. Perhaps you’re a writer, an artist, or an entrepreneur creating a business out of thin air. [Yes, my fellow entreps, you get to hang with the other cool creator kids too!!] If you don’t plan on adapting at all, that makes you far more vulnerable to this coming wave than, say, a chef who runs a restaurant, or someone who manages real estate and rentals. And that really is the key: adaptation. So how can we adapt?

If you’re a creator, you need to become a user of AI. You’re probably already using GPT to write copy for your product catalogs on your e-commerce website, or using MidJourney (MJ) or Dall-E to create art for ad campaigns. If you’re a designer or artist, you may feel the kind of resentment this Blender artist does in the Blender subreddit.

“My Job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Rn all I do is prompting, photoshopping and implementing good looking pictures. The reason I went to be a 3D artist in the first place is gone. I wanted to create form In 3D space, sculpt, create. With my own creativity. With my own hands.”

“It came over night for me. I had no choice. And my boss also had no choice. I am now able to create, rig and animate a character thats spit out from MJ in 2-3 days. Before, it took us several weeks in 3D. The difference is: I care, he does not. For my boss its just a huge time/money saver.”

While I sympathize with how hard change and disruption can be, they have been a constant in many fields for the past couple of centuries. MidJourney has a long way to go before it can match a real-world artist, unless you’re just churning out images, letting the AI guide the design choices, and happily working around the bugs. For MidJourney and other generative AIs to produce exactly what we want, they’re going to have to get better at understanding what exactly we want to create. And that’s where the skill comes in. You’re already seeing this with a document someone created listing famous photographers and examples of their look. It can be used in MJ prompts – “in the style of” – to get a specific look, but it is an incredibly rudimentary approach.

Another way to guide the MJ AI in particular is to blend photos it has generated. Again, super rudimentary, but it’s the start of being able to tightly specify exactly what you want and get it out of MJ. And if you need a reminder of how basic it still is, try to get MJ to generate hands. It still sucks, even at version 5.

So if you’re a creator, start getting good at using the tools now, understand their limitations, and evolve as the products evolve until you’re an expert at guiding the AI to create exactly what you want. This will help you explain to your customers the limits of the current state of AI and where you add value, let you take immediate advantage of what the current tools can do, and ramp up your productivity as the tools get better at taking instructions from you.

This applies to writers, artists, designers, filmmakers, photographers, screenwriters, and anyone with creative output. Get good at the tools. Get good at them now. Do it with an open mind. Know that changes aren’t permanent and that change is. (Again with sneaking in the Rush lyrics)

Adapting as a Dev

Coders! My people! We have a problem. Most of you have become users of AI. You’re users of GPT-4 via their API. You’re plugging into other generative AIs via an API. You aren’t rolling your own. And rolling your own is where all the fun is!!

Ever heard of transfer learning? You can grab a pre-trained model from Hugging Face, chop off the head – aka the final layer of the neural network – substitute it with random weights, and train the model on your own data, taking advantage of the sometimes millions of dollars someone else already spent training it. In fact, Facebook’s LLaMA model, one of the largest LLMs in the world, was recently leaked via torrent.
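
Here’s roughly what that looks like in code – a minimal sketch, assuming the Hugging Face transformers package, PyTorch, and a made-up three-label task. Loading a pre-trained backbone with a new num_labels gives you a fresh, randomly initialized head, and freezing the backbone means you only train that head:

```python
# Transfer learning sketch: pre-trained backbone + new random head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The head is initialized with random weights sized for our 3 labels;
# the backbone keeps the weights someone else paid to train.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Freeze the backbone so only the new head learns.
for param in model.distilbert.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

# One hypothetical training step on your own labeled data.
batch = tokenizer(["great product", "awful", "it's fine"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([2, 0, 1])

outputs = model(**batch, labels=labels)  # loss is computed for us
outputs.loss.backward()
optimizer.step()
```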

The most important thing you need to do right now as a developer is to stop being a user of AI and become a dev of AI. GPT-4 is a shiny ball that the world will have forgotten about in a year, but it’s a very shiny and attractive ball right now that is fueling many a late-night dev chat. Remember that stat I gave you above, that LLMs have been increasing in size at 10X per year? The current state of the art will be accessible to you on a desktop in a few years, and you need to get ready for that world today.

I’m going to just go ahead and tell you what you need to do to get your AI stuff together, fast.

Ignore the math. Trust me on this. Most people, including devs, are not good at math, and it intimidates the hell out of them. AI is just matrix multiplication and addition using GPU cores to parallelize the ops (see the sketch after this list). Expressing this as code is easy. Expressing it as math will make you hide under your bed and cry. Ignore the math. If you can code, you’ll get it.
Learn Python. Everything in AI is Python. It’s a beautiful little language that you’ll come to appreciate very quickly if you’re already a dev. It’s like coming over to Aikido if you’re already a black belt. OK, the MMA scene kinda messed up my metaphor by proving that Aikido is actually worthless, but whatever.
Then go do the Practical Deep Learning for Coders course at fast.ai. It’s how we get our people up to speed in the field, and it’s brilliant. Jeremy Howard does a spectacular job of making you productive immediately and then unpacking the details in a fun, non-mathy way.
As you progress in the course, definitely get up to speed with Jupyter Notebooks, and I’d recommend Kaggle for this. Kaggle was bought by Google a few years ago and kind of competes with Google’s own notebook system, Colab, but I prefer Kaggle. You get GPU access by simply verifying your email address, and it’s free, which is kind of amazing. So you can use a rich text environment on Kaggle to write your code, see the output, and run it on some fairly decent GPUs. Kaggle GPUs perform at about 20% of the speed of my laptop’s RTX 4090, in case you’re curious about benchmarks.
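
And to back up the “ignore the math” point above, here’s the entire conceptual core of a neural network layer as code – a sketch assuming PyTorch, which is what fast.ai builds on. One matrix multiply, one addition, one simple nonlinearity:

```python
# A neural network layer is just: multiply by weights, add a bias,
# apply a nonlinearity. The GPU parallelizes the multiply-adds.
import torch

x = torch.randn(1, 4)  # one input with 4 features
W = torch.randn(4, 3)  # learned weights: 4 inputs -> 3 outputs
b = torch.randn(3)     # learned bias

out = torch.relu(x @ W + b)  # the whole layer
print(out)
```

Stack enough of these and you have the models making headlines. That’s the math you were told to fear.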

The course teaches fundamentals, how to use pre-trained models, how to create Jupyter Notebooks or fork others, how to create Hugging Face Spaces, and how to share your models and their output with the world. It is the fastest way right now to transform yourself from an AI user into an AI dev and get drinks bought for you at parties by folks that have not yet made the leap.

Alright, this went long but that was the plan. We’ll talk more about AI. Go forth, be brave, learn, and create!

Mark Maunder – Founder & CEO – Wordfence and Defiant Inc.

Footnotes: All images on the page were created with MidJourney, and if you’d like to see the prompt I used, simply view an image in a new tab – the image name is the prompt. The exception is the heavy metal hands image, which a colleague created. I’ll be in the comments in case there’s discussion.
