Author: Robert

  • Block-based Forms

    Old-style WordPress forms packages do a lot of things, but they also leave out things you’d think would be really obvious. One thing you can’t really do these days is build forms directly using ordinary WordPress blocks.

    Now, there are a good half-dozen prominent forms packages in the WordPress community and some of them are quite impressive and innovative. And free, to some extent: it’s not at all unusual to find that the free version of a forms package will let you collect info from users about just about anything you can imagine, because you can label your fields and input gizmos however you like and interpret the data they collect accordingly.

    But this only works because the forms don’t really do anything with the data beyond emailing it to you.

    And then, blam! The moment you want the form to do anything more with the data, you’re into paid add-on territory.

    So, here’s the thing. I’m in the process of putting a forms package into the WordPress.org repo, and it, too, doesn’t do anything with the data beyond mailing it to you.

    There’s a twist, though, for those with a little programming know-how. You give each form a unique name. If you also create a PHP file with that form’s name and put the file in a folder that’s reserved for such things, then this file will execute before the email is sent (if, in fact, you’re having the email sent; that’s optional).
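
    Purely to illustrate the idea (the handler’s location and the variable holding the submitted values are stand-ins here, not the plugin’s documented behavior), a handler file for a form named “contact-request” might look something like this:

    // Hypothetical handler for a form named "contact-request".
    // Where this file lives and how the submitted values reach it are
    // assumptions made for this sketch, not the plugin's documented API.
    
    // Suppose the submitted field values arrive as an associative array.
    $fields = isset( $submitted_fields ) ? $submitted_fields : array();
    
    // Do whatever you like with the data before (or instead of) the email step:
    // write it to a custom table, call another API, and so on.
    error_log( 'New contact request from ' . sanitize_email( $fields['email'] ?? '' ) );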

    If you use it this way, then essentially all it’s doing is helping you easily put together a conventional form using normal HTML and the regular submit process. Strangely, this is harder to do than you might think. Most of the other form packages have moved over to REST or Ajax calls to submit their forms. There are some good reasons to handle things that way, but it makes it much harder, if not impossible, to step in and take over the processing once the user clicks ‘Submit’.

    Surprisingly handy

    I was surprised to see that this, all by itself, could be pretty darned handy. My surprise stemmed, I think, from coming at the whole thing sort of backwards. I’d started by focusing on having a custom database that forms automagically wrote to on submit. The forms were just the front end for a system, which was where my real focus lay.

    But just throwing the form together quickly and then not having to deal with the baggage that the form package provider has tacked on (a different add-on for each integration, and so on) turns out to be handy, at least for me. My hope is that it’s handy for at least a few folks out there as well.

    There’s a premium product in the pipeline as well, which I guess is no surprise. It adds in the back-end stuff I mentioned above. And then things get really interesting, but that’s a topic for another post.

  • That Opening Keynote at WCUS 2024

    Ah, WordCamp! The event opened on a Thursday and by Friday late afternoon Matt Mullenweg had driven a wedge into the WordPress community, a rift that still remains to be sorted out.

    Eventually, I got past the drama aspect of the event and started thinking about the content, and one thing that stuck in my mind was the opening keynote. Joseph Jacks, the founder and general partner of a venture capital group called OSS Capital, talked about the difference between what he termed “closed-core” and “open-core” business models.

    I just went back and viewed the video of the talk and I think there’s a lot more to be said about “open core” and how it does and doesn’t work. We’ll get to that.

    The Bittensor bit

    What I also noticed was that there were a couple of interesting bits thrown in almost as an afterthought at the end, when Jacks, obviously enthused about the project, took a few moments to talk through what the Bittensor project is.

    He was talking about the way that basing a project on a blockchain can allow a community to enforce that everything that happens on the chain is open source.

    Whose source is open?

    Jacks said: “It’s also related to something that Matt was blogging about maybe yesterday or the day before which was… we have this kind of phenomenon in the industry where people are trying to say that their models are open source, and they’re really not open source.”

    His example for this was Facebook’s Llama, but it’s interesting as background for how Mullenweg was thinking about the loose couplings between open source projects and the companies that exist because those projects provide them a platform.

    Couple me loosely

    I think the issue that the WordPress community is concerned about at present is how that “loose coupling” is defined and managed. There’s been a sort of “loose consensus” about what’s acceptable for, say, plugin businesses, but clearly rather different understandings of acceptability have developed.

    It’s also interesting that Mullenweg, on the cusp of attacking WP Engine in no small part because it was now owned by an equity firm that he accused of taking value from the project and community without giving anything back, picked a lead-off speaker who invests in companies that build atop an open-source ecosystem.

    Mullenweg is clearly trying to reshape and tune the loose coupling model–and much of the pushback from a frustrated WordPress community accused him of precisely this kind of hypocrisy. These complaints grew louder, of course, when Mullenweg subsequently took far more active control over the WordPress.org site and its plugin repository. The community understood the .org site to be a community asset; legally it belonged to Matt.

    The question of who governs

    I’m not sure how the rift between Matt and a substantial fraction of the WordPress ecosystem will be resolved, but for me it raises the issue of how open-source projects should be governed.

    Funnily enough, I had somehow forgotten the tacked-on part of that opening keynote. I was excited to see that there’s already a fairly large open-source project underway with a radically different governance model, because what controls activities and decisions in Bittensor is a set of rules built into the blockchain, along with possession of the chain’s currency token, the TAO.

    “What it also does,” Jacks said, “is it allows the user to basically actively participate in the ownership and governance of the model, so you don’t have a single company controlling the AI that gets produced.”

  • Airtable things

    Since the stuff I’m building these days has similarities to some of the things Airtable does, I figured, why not build a version of the request queue I’ve already built with PeakZebra, only this time with Airtable.

    TL;DR: Throw in all the AI you want; once you’re past just storing everything in a single sheet, it’s pretty easy to make a false step (or to use the AI to try and do too much) and wind up with something that just doesn’t work. And it’s possible that it won’t be obvious to you that it’s not working the way you think.

    Airtable is pretty cool and there’s no question that it’s got way more polish than PeakZebra does just now, but then again, there are ways in which it’s considerably trickier to build things with Airtable if you’re coming in cold.

    Magic AI bullshit

    These days, it almost goes without saying, Airtable has some magic AI bullshit built in. The promise is that you describe the app you want and it builds it.

    Honest to god, I didn’t try to trick it into screwing things up. I tried to come up with a concise description of what I wanted my request queue to do. I’m afraid I didn’t preserve the actual prompt I gave it, but I can tell you that it created tables that appeared to do the right things, but the relationships between them weren’t at all correct.

    So the AI stuff was a bit of a crock, but that’s not really what I was interested in getting at in any case. And if you just wade in and do it yourself, there are all sorts of things Airtable does that are very powerful.

    Easy and powerful things

    It’s very easy, for example, to connect a field in one table to records in another table. For instance, if you have a table where you store all the requests that are coming into your request system, it’s easy to tell it that the client field should come from the clients stored in a client table.

    And there are some nice touches. When you click on the client you’ve chosen in one of the request records, it pops up a view of all the data in that client’s client record. Handy.

    So the app lets you establish clients and lets you create a form for adding requests that are associated with your client identity.

    Twist once for death

    I wanted an extra twist, though. I wanted to be able to let a client create a bunch of associated requests–all the tasks needed to carry out a given project–but not necessarily put all of those tasks into the request queue.

    So I wanted a project table to store the name of various projects, plus a tasks table to hold all the tasks associated with all the projects of all the clients.

    This is easy enough. It’s also straightforward to add a field that toggles to tell you whether a given task is currently in the request queue.

    Doing this, though, means that you don’t want to store requests in a request table, but that instead you want the request queue to be a view of the tasks table that filters on tasks that are flagged as being in the queue. You can also add a further filter or grouping to show the queued items by project and/or by client.

    Wait, one more twist

    Ah, but another twist. Some things are projects with a bunch of different steps, each one handled as a separate task. But other things are one-offs that need to be done and that really aren’t part of a project. Say you’ve got a website and you want to request a change in some content on that website. That’s not a project.

    You could create a project for “non-project” tasks, but what’s conceptually cleaner is to have things in the queue either be tasks from projects or be standalone tasks. There’s no obvious way to have a field be populated by data from either of two tables.

    Now, let’s be clear. I don’t have any doubt whatsoever that there’s a way to do this. There are almost certainly several different ways to approach it. But if you’re trying to avoid learning lots of technical minutiae about the Airtable environment, you’ll hit a wall with something like this.

    Sometimes a little code is the magic

    The PeakZebra approach doesn’t preclude figuring out how to do the trickier stuff yourself, but as part of the basic arrangement you also have the option of, well, using the PeakZebra request queue (on PeakZebra, not Airtable) and just asking us to do it for you.

    But my real point here, I think, is that services like Airtable and Notion and others are focused on making everything work for everybody without requiring any code. And this can sometimes completely obscure the ease with which the same thing could be accomplished in a “low-code” approach with just a couple of lines of code.
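
    To make that concrete: the two-table snag described above is the sort of thing a couple of lines of SQL handle directly. Here’s a rough sketch in WordPress terms, with table and column names invented for the illustration rather than taken from any actual PeakZebra schema:

    // Sketch: pull queued items from either a project-tasks table or a
    // standalone-tasks table into one list. Names are made up for illustration.
    global $wpdb;
    $queued = $wpdb->get_results(
        "SELECT title, client_id, 'project task' AS source
           FROM {$wpdb->prefix}pz_task WHERE in_queue = 1
         UNION ALL
         SELECT title, client_id, 'standalone' AS source
           FROM {$wpdb->prefix}pz_standalone WHERE in_queue = 1
         ORDER BY client_id"
    );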

    And oh by the way: I’m not as against AI for coding (and similar code-like tasks) as it might sound from this post. I’ve been doing more and more AI-assisted programming of late, and there are definitely things about it that make me loads more productive.

  • More Cursor Programming in WordPress

    I just spent about thirty minutes creating a couple of functions. I can’t really say I wrote them; it was almost entirely done by AI. I think it’s a nice example of ways in which Cursor and AI can really shine, so I thought I’d spend a couple minutes walking through how it went.

    PeakZebra (the system, not the company) has a SQL table that stores data about each of the other SQL tables that PeakZebra creates and uses. The format (for better or worse, and I suspect it’s probably worse) is that the string you’d hand off to the dbDelta function is stored in its entirety.

    The table of tables table

    While initially building PeakZebra, this made it easy for me to quickly change the fields in any of the (now rather numerous) tables I use. I’d just jump into a PHP file where each of these strings was added to the pz_table_def table. An example looks like this:

    // dbDelta() is defined in wp-admin/includes/upgrade.php, so load it first
    require_once ABSPATH . 'wp-admin/includes/upgrade.php';
    
    // charset/collation suffix used in the CREATE TABLE string below
    $charset = $wpdb->get_charset_collate();
    
    // interaction table
    $table_name = $wpdb->prefix . "pz_interaction";
    
    $item = array();
    $item['id'] = null;
    $item['table_name'] = 'pz_interaction';
    $item['field_string'] = "CREATE TABLE $table_name (
      id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      per_id bigint(20) NOT NULL DEFAULT 1,
      summary varchar(255) NOT NULL DEFAULT '',
      details varchar(800) NOT NULL DEFAULT '',
      created varchar(12) NOT NULL DEFAULT '',
      PRIMARY KEY  (id)
    ) $charset;";
    
    // write the definition string to the definition table...
    handle_def_record($item);
    // ...then let dbDelta create or update the actual table
    dbDelta($item['field_string']);

    The handle_def_record($item) call writes the string to the definition table. Then dbDelta actually creates the table.

    If you’re not familiar with dbDelta, it’s pretty handy. It creates the table you’ve described if it doesn’t exist. If the table does exist, it examines it and adds or adjusts columns as needed by comparing the existing table to your CREATE TABLE string (it won’t drop columns on its own, though).

    I wanted to add a way for users to add fields to tables (and perhaps to create new tables), so I wanted to grab an existing string from the table with the defining strings in it, parse it into the individual fields, and then present a form where new fields could be added (deciding to worry about deleting fields later).

    Toward that end, I opened the PHP file of a “table” block I’d recently created and gave Cursor this prompt:

    let's add a function called read_table_def that reads the record with the table_name field equal to 'pz_person' and returns the field_string value for that row. 

    Cursor gave me this:

    // ... existing code ...
    
    function read_table_def() {
        global $wpdb;
        $table_name = 'pz_person';
        
        $result = $wpdb->get_var(
            $wpdb->prepare(
                "SELECT field_string FROM {$wpdb->prefix}pz_table_def WHERE table_name = %s",
                $table_name
            )
        );
        
        return $result;
    }

    Some things of interest here:

    1. Although it could infer that I was working within WordPress from the file I’d opened, I didn’t explicitly say so.
    2. It knew enough to know that it would need the $wpdb global.
    3. It wisely uses $wpdb->prepare(), something I’ll admit I’m often too lazy to bother with.

    Now, the code it supplied was correct, but I’d screwed up and told it the wrong table name, insofar as I’d forgotten the WordPress database prefix.

    Debugging and debugging the debugging

    As a result, I asked it to add a bunch of debugging code, which it did, leading me to realize, within ten minutes or so, that I’d screwed up.

    At that point, I asked it to add the prefix, and it did:

    function read_table_def($table_name) {
        global $wpdb;
        
        // Debug: Print the actual query
        $query = $wpdb->prepare(
            "SELECT field_string FROM wp_pz_table_def WHERE table_name = %s",
            $table_name
        );
        echo "Query: " . $query . "<br>";
        
        // Debug: Print the table contents
        $all_records = $wpdb->get_results("SELECT * FROM wp_pz_table_def");
        echo "All records: <pre>";
        var_dump($all_records);
        echo "</pre><br>";
        
        $result = $wpdb->get_var(
            $wpdb->prepare(
                "SELECT field_string FROM wp_pz_table_def WHERE table_name = %s",
                $table_name
            )
        );
        
        // Debug: Print the result
        echo "Result: ";
        var_dump($result);
        echo "<br>";
        
        return $result;
    }

    Of course, I didn’t want the prefix hardcoded. An easy fix. I asked Cursor to remove the debug code, which it did. Then I moved on to a function that would take the string and parse it into an array of individual field definitions.

    I’ll skip over the details, but once again I’m fascinated by the things it “figures out” on its own: for example, telling me that it’s assuming the fields are comma delimited, but that it’s an easy fix if that turns out not to be right.
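
    For the curious, the parsing function amounted to something like the following; this is a simplified sketch rather than Cursor’s verbatim output, and the function name is mine:

    // Simplified sketch of the parser discussed above: split a stored
    // CREATE TABLE string into individual field definitions.
    function pz_parse_table_def($field_string) {
        $fields = array();
    
        // Grab just the column definitions between the outer parentheses.
        $start = strpos($field_string, '(');
        $end   = strrpos($field_string, ')');
        if ($start === false || $end === false) {
            return $fields;
        }
        $body = substr($field_string, $start + 1, $end - $start - 1);
    
        // Assume one definition per line, comma delimited (the same assumption
        // Cursor flagged), and skip the PRIMARY KEY line.
        foreach (explode(",\n", $body) as $line) {
            $line = trim($line, " ,\r\n\t");
            if ($line === '' || stripos($line, 'PRIMARY KEY') === 0) {
                continue;
            }
            // First token is the column name, the rest is its definition.
            list($name, $definition) = array_pad(explode(' ', $line, 2), 2, '');
            $fields[$name] = $definition;
        }
    
        return $fields;
    }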

    It’s strangely like working with an actual mental process. It’s probably terrible news for junior programmers, if for no other reason than that it pretty much never screws up things like the parameters and formats of arcane system function calls.

  • The Creator Business

    I think the creator business is probably a little confused about itself, about where its edges can be found, but that’s fine.

    What I like about the general concept is how most creators have some important web and online needs in common. Most other businesses have at least some parts of the same set of needs, but the scale and interconnection of the tools used to address those needs is actually fairly different.

    It’s hard to imagine a creator business getting much use out of Salesforce (though no doubt somebody’s about to tell me otherwise). It’s too complex, requires too much interaction on a per client or per prospect basis, and so on.

    Alright, maybe there’s even an argument to be made that Salesforce could make sense when dealing with the actual thousand true fans if that’s the way you’re thinking about what you’re doing. But even there I don’t really see it.

    Home is where your home page is

    You need a web home. You need your own mailing list of prospects and followers. You need a mailing service to get email campaigns sent out, possibly you need a drip campaign type of capability.

    You need to be able to keep the books, possibly you need to track inventory, possibly you need to generate invoices. But you don’t, most likely, need the whole kit and caboodle of a full-blown accounting package (even one targeting small businesses like QuickBooks). You may wind up using some of the more conventional tools, but you’ll just be using the outside edge of what they can do (which includes all the things you don’t need).

    You may have paid subscriber needs, or you might want to be supported by a more Patreon-like (pay by the work product item, for instance) approach, and so on.

    Build by plugging things together

    What you need is a sensible platform where you can maintain your own web presence and, ideally, layer on the tools you need in a way that keeps things minimal and manageable by a solo operator or a small team. Almost all the things you need to do can be handled by a seeming universe of SaaS operations with annual subscriptions, but if you’re not careful you wind up paying a big stack of monthly fees for things that you wind up figuring out how to interconnect into a system on your own.

    While PeakZebra’s initial product vision wasn’t targeted specifically at creators, its use in the creator economy became increasingly obvious as we moved forward building our toolset. You want newsletter signups, but not lots of extra baggage managing your lists. You’d like interactions with users that let you learn more about them individually, but in a way that allows you to mass customize the content you present to each one.

    You need subscriptions? We do it by harnessing one of the most-used WordPress plugin options (but you see it as part of our offering–no setting up, configuring, and learning to navigate completely new systems). You need reminders sent to members whose subscriptions will expire soon? It’s in there and it’s dead simple.

    We’ve got some things to add before this makes total sense as a use case for PeakZebra, but we’re well on the way, so if you’re a creator, you might want to keep an eye on us.

  • A Modest Licensing Proposal

    Hey OSS folk: we need to start thinking outside the conventional GPL licensing box. We need a rational ecosystem for paid plugins and themes (in WordPress) and analogous capabilities on other OSS platforms. I think we can create a much better arrangement for all involved.

    I envision a licensing system that allows participation in an overarching governance system. If you want to take part in a project that uses this system and use its associated .org repositories and so on, you’d get a license for any internet-facing deployment. You’d pay a small licensing fee, let’s say five bucks a year, calibrated downward as needed to take differences in international buying power into account. Five bucks gets you a vote. For some number of sites, let’s say two dozen, a single five-buck payment would get it done, but for big players (measured by some sensible metric, though I’m not sure which), more cash would be involved. And bigger players would have more (but not infinitely more) votes.

    It’s a board!

    The main task for voters would be selecting a three or five-person board, but for really large issues (classic editor or blocks, to take an example from the past), direct votes might be called.

    In a greenfield new project, I’d propose a mechanism where the project creator got a big block of votes, such that they could be benevolent dictator for a while (I think there’s value in this, early on—clarity of vision and so on), but as popularity of the project grew and more people acquired licenses, the control would naturally and gradually shift over to the community as a whole.

    Who runs the thing that runs the things?

    Who would run this? I imagine a few pillars of the OSS world forming some legal entity that maintained the license sales and voting procedures. It would be possible, maybe even preferable, to run this on a purpose-built blockchain, but it’s hardly a requirement, as the community would just need to trust whoever was running it.

    Now then, what about the money? All the things you might expect the money could be used for would be what the money was indeed used for, at the discretion of the elected board. And we’re talking about a substantial amount of money, the kind of money that drives early marketing campaigns, but clearly also full-on development in the form of sponsorship for core developers and, well, other things like a project’s training videos on YouTube…all the things.

    Not GPL

    To make it work, we’d need to step away from GPL licensing. That sounds severe and even ill-advised, but just because something isn’t running under a GPL or MIT license doesn’t in the least mean it can’t be open source.

    With a non-GPL license, we could have the benefits of open source, but create ways for companies writing plugins and the like genuinely to keep control over how their code was used. The code could remain open, or mostly open, but no one would be in a position to simply take code and sell it as their own (something GPL expressly allows). If the takeover of ACF as SCF didn’t feel right to you, this is how to fix the rules that make it possible.

    If we went this route, selling licenses for use of the governance services of new OSS projects, we could deal with problems such as those currently creating havoc in the WordPress community in a straight-up fashion. Under this kind of arrangement, Matt Mullenweg would have migrated out of a BDFL role years ago. We can’t undo the WordPress GPL license (if we even wanted to), but we could avoid wasting our time wringing our hands and making squeally noises. That’s right: squeally noises.

  • A Twin-Star Site Model

    I don’t love the name, but I think the creator economy is a real-enough thing. There are at least a couple million creators on the web, I read somewhere, who make a living at it.

    My impression is that they either go with something like Patreon or Substack as a way to platform themselves, or use Gumroad to sell things, or exist more or less entirely on YouTube, raking in the ad money. Maybe you can run your whole shop on Patreon (they’re introducing a community capability, and founder Jack Conte even has a whole theory about how this makes sense; indeed, he does make some sense).

    Creators and cobblers

    It seems like most creators cobble their online presence, the tools they use to manage that presence, and the back-end tools for accounting and such together out of various pieces.

    I suspect a lot of time gets wasted in the cobbling and the learning involved with these tools. Fact of the matter is, for most creators, the typical tools are just way too feature-rich and, as a result, complex.

    I’m not a creator in the sense that I’m talking about in this post, but I have done some of my own cobbling, enough to notice that, for instance, QuickBooks is just way, way, way more capability than I need, because all I need is to send out and track a few client invoices. And at nearly $40 a month for QuickBooks, I’m paying way too much for the privilege of letting them email my invoice forms.

    So I’m dropping QuickBooks in the new year and the general plan is to eat my own dog food. (I hate that expression, I think because I find life analogies that use food inherently coarse. But I can’t think of a better alternative–send help.) I’ll use PeakZebra to knock out a dead-simple invoicing system.

    PeakZebra(s)

    Whatever I pull together using PeakZebra, my plan is to evolve PeakZebra to support a “twin site” scenario where one site (PeakZebra.com) is the public facing site and another site (PeakZebra.something_else, presumably) will handle the back end things.

    This means that some elements will reside on the front-facing site, things like newsletter signup forms, to pick the simplest example. But when that signup form is submitted, it will be sent via an API call to the PeakZebra code on the backend server. The data from the form will be stored in the backend server’s SQL database, and when it comes time for me to do something with the data stored there, I’ll log in and use apps on the backend, while the front end site hums merrily along.
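
    Just to give a rough sense of the shape of that front-to-back call, the front site might do something like the following when a signup comes in. The route name, payload fields, and shared-secret header here are assumptions for the sketch, not actual PeakZebra endpoints:

    // Minimal sketch: the public site forwards a newsletter signup to the
    // back-end site over the WordPress REST API. Route name, payload fields,
    // and the shared-secret header are placeholders, not real PeakZebra APIs.
    function pz_forward_signup_to_backend($email, $name) {
        $response = wp_remote_post(
            'https://backend.example.com/wp-json/pz/v1/signup',
            array(
                'headers' => array(
                    // Shared secret so the back end can reject anyone else.
                    'X-PZ-Key' => defined('PZ_BACKEND_KEY') ? PZ_BACKEND_KEY : '',
                ),
                'body'    => array(
                    'email' => sanitize_email($email),
                    'name'  => sanitize_text_field($name),
                ),
                'timeout' => 10,
            )
        );
        return !is_wp_error($response)
            && 200 === wp_remote_retrieve_response_code($response);
    }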

    Enhanced security

    With the right setup, this is a more secure approach to managing things like subscriber data, because you can put a lot more controls in place around who accesses that site than you can on a site that you want anyone and everyone to be able to at least see. And while WordPress is secure when properly configured, it’s safer still if the data isn’t even on the visible site.

    We’ll see how this goes–it’s not an immediate priority to have a “twinned” site arrangement, but I can still work on the sorts of simple tools I want, running them for now on PeakZebra.com but eventually migrating the backend stuff to a backend WordPress install.

    Is a twin site actually more secure? I think so, but I also think the crux of the question comes down to how secure you think the API calls that the front will make to the back will be. For my money, those can be locked down pretty darned tightly.

    You wind up with an arrangement that most attackers won’t have encountered before, with API calls being made from server to server. The attacker will not ordinarily have any way to see the API request data, nor will they be able to see the second server on the net unless they find themselves within a fairly narrow IP address range.
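
    On the back-end side, one simple way to get that lockdown (again a sketch, using the same placeholder names as above rather than anything PeakZebra actually ships) is to register the REST route with a permission callback that checks both the shared secret and the caller’s IP:

    // Sketch: the back-end site accepts the signup route only from the front
    // site's known IP address and only with the shared secret. The route,
    // constant, and IP are placeholders.
    add_action('rest_api_init', function () {
        register_rest_route('pz/v1', '/signup', array(
            'methods'             => 'POST',
            'callback'            => function (WP_REST_Request $request) {
                // Store the signup on the back end here (details omitted).
                return rest_ensure_response(array('ok' => true));
            },
            'permission_callback' => function (WP_REST_Request $request) {
                $allowed_ips = array('203.0.113.10'); // front site's server IP
                $key_ok = hash_equals(
                    defined('PZ_BACKEND_KEY') ? PZ_BACKEND_KEY : '',
                    (string) $request->get_header('X-PZ-Key')
                );
                return $key_ok && in_array($_SERVER['REMOTE_ADDR'], $allowed_ips, true);
            },
        ));
    });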

  • Imagining a WordPress Greenfield

    Just suppose, just for a moment, that you and I were tasked with creating something every bit as wonderful as the wonderful parts of WordPress, but starting with a (mostly) blank slate.

    It’s the plugins

    Well, one thing we have to get out of the way right away is what to do about the huge number of useful plugins that are the particular strength of the WordPress platform. Do we want to walk away from the ecommerce plugins, the learning management kits, the membership tools?

    If we want our cake and the eating of it, we’ll have to reckon with the immutable fact that all of that stuff is written in PHP and it all runs on the server. There are no lambda plugins. There are no client-side plugins.

    And it’s PHP. Some folks argue that PHP is antiquated, but I think that’s about fashion more than sense. It’s a fairly sophisticated language, performant, all that.

    All JavaScript?

    But as long as we’re declaring a fresh start, you’ve got to reckon with the hard truth that PHP doesn’t run on clients. Essentially, JavaScript is the only choice in that regard. And if you’re running JavaScript on the client, it makes life a lot easier to be running the same language up on the server.

    If we change gears and use Node.js on the server, then the challenge is finding some way to continue using existing plugins.

    I’ve turned this over and over in my mind. On the one hand, it seems pretty likely that a WordPress-specific translator could be built to turn plugins into Node.js plugins. If we do that, though, then we have to support all the action and filter hooks.

    So, on the other hand, maybe what we want is to make it easy for plugin providers to rewrite their code bases anew. Assuming an approach that was generally similar to WordPress’s approach, developers would have a pretty good sense of how they should approach various tasks.

    For example, you’d probably want get_current_user() to be getCurrentUser() and you’d probably want it to return a user ID. And you’d want to have roles and the roles would be collections of capabilities.

    Post much?

    That’s easy enough (maybe), but do we really want to commit to preserving the concept of everything being a post? We’d have to give that one some thought, but maybe the easiest way forward is to stick with posts and pages and custom posts and such.

    There’s a lot of complexity in core, though, the inevitable result of organic growth over twenty years, and maybe we just carefully sort through and discard a lot of the baggage that still works but probably was never such a great idea. And maybe we tack on a new concept or two, like having a React-like routing system be the default.

    But what we want is for WooCommerce defectors to have a straightforward sense of how to write a new and significantly more straightforward ecommerce solution. NewCommerce?

    Astro adoption?

    There’s another question to consider: given the complexity of building this sort of system, possibly the best way forward is to start with an existing system and then add on whatever stuff is needed to support critical plugin rewrites.

    This is the line of thought that has landed me at the front door of the Astro community. If you’re unfamiliar with Astro, it does content sites, like our friend WordPress. And it’s server centric, like WordPress. But it’s also vastly newer and thus not yet covered in all those rustic barnacles.

    It appears to have themes. It doesn’t, as far as I can tell, have plugins in the sense one has them in WordPress, but it seems at least conceivable that an interface to a plugin system could be created. It doesn’t have an in-built editor, but again, that seems like something that could be whipped up. Or, in a funny little twist, I feel pretty darned confident that the WordPress block editor could be pressed into service.

    So I’m going to explore Astro. Not because I’m so convinced that the current troubles in the WordPress world are going to lead to WordPress falling apart, but because I think we all need to think about hedging our bets. And, frankly, because there may be better options out there in the blog-and-content-creation universe.

    So, more to come on this. One clear takeaway, though, is that WordPress has built up an enormous and enormously useful ecosystem and feature set over the years. Leaving would be painful.

  • Headless WordPress and why it matters

    You know there’s headless WordPress, but you may not be clear on how you’d make it happen. Or, more importantly, why you’d make it happen.

    What is headless WordPress?

    Let’s start with a quick rundown of what makes a WordPress site headless, why the naming in this case is exactly backwards, and just generally get ourselves on the same page.

    The conventional headful approach

    Normal WordPress is a world in which the action happens on the server. A website visitor requests a page from the server and the server assembles the page components (header, body, footer) from the database and any relevant templates. This is sent to the browser, any browser at all. And if something happens down there on the browser, it will result in a new page being requested from the server.

    Where this basic operation is perhaps most clearly visible is when the site provides some kind of data application. Maybe it’s a CRM application, so you might request a list of clients in the system. You get a display of the first 25 of them from the server, say. If you want to see the next page of clients, a new page will be requested from the server. If you want to see a particular client, a new page will be requested to display that client’s information. If you change the information for that client and want to save it, you’ll submit a form to the server and a new page will be delivered to show the update.

    Meanwhile, in the rest of the universe

    For most of the rest of the web, this isn’t typically how an application works, however. If you start with an application that shows a list of client records, then when you want to see the next page of them, a request will be sent to the server to retrieve only the data for the clients that need to be shown. The page with the client list won’t be replaced; rather, the new set of clients will be displayed on the existing page where the previous clients were listed.

    It’s possible that you can edit any of the client fields you can see on each row of the listing. Let’s say you do this and press a save icon at the end of the row you’ve changed. Again, this doesn’t result in a new page being requested. Instead, the listing continues to show the change you made and the change is sent as an update request to the server.

    The server, in other words, is just supplying data at this point, not pages (though, in our scenario, it probably supplied the initial listing page).

    Decouple this

    There are two things we should notice about this scenario. First, there’s got to be some kind of back end that answers requests for data and updates, even if it’s not supplying the pages. Second, the pages still have to come from somewhere. But the pages and the data don’t really have to come from the same place, and thus we can say that the presentation and the data have been decoupled.

    When you decouple the head of a thing, well, it becomes headless. So the baseline idea of headless WordPress is that there’s a WordPress server running, but it’s not supplying the pages that the website visitor is seeing.

    So where are the pages coming from? That depends, but most scenarios out there on the web right now fall into the basic pattern of using React (or some React framework that extends React) to create pages that can be retrieved from web servers as plain HTML and JavaScript files. These pages aren’t assembled or calculated on the server end, they are simply sent to the client as they stand. You’ll hear these scenarios referred to as static sites. That’s because the server doesn’t muck around with them–they can be plenty active once they are displayed in a browser window.

    One question that may already have popped into your mind is: what is React? And that’s an excellent question, but not one that we’re going to answer in any detail here. Suffice it to say, it’s a pre-built set of capabilities implemented in JavaScript, where the capabilities mostly have to do with user interactions.

    The key thing is that JavaScript and React are capable of asking the server (or more than one server) for data that it needs to display. The server that sends the data down to the browser in the headless WordPress scenario is, you guessed it, a WordPress server.

    Headless

    There are plenty of headless scenarios where the server isn’t a WordPress server and there are even scenarios where there arguably isn’t a server in the traditional sense.

    But we’re talking WordPress here. In that scenario, there are two primary ways that WordPress might interact with whatever’s going on down there at the browser window. It may, in the older and more widely adopted approach, use a REST API to make requests for data (or requests to place or update data on the server). Making a REST call is based on requesting a particular URL and it either places any changeable data at the end of the URL (as parameters) or it arranges them in the same way you might arrange data when posting a form to a web server.
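
    To make that first approach concrete: core WordPress exposes posts (among many other things) at /wp-json/wp/v2/posts, and any client that can make an HTTP request can pull them down. Here’s a minimal sketch in plain PHP, with a placeholder site URL:

    // Minimal sketch: fetch the five most recent posts from a WordPress site's
    // built-in REST API. The site URL is a placeholder.
    $url   = 'https://example.com/wp-json/wp/v2/posts?per_page=5';
    $posts = json_decode(file_get_contents($url), true);
    
    foreach ($posts as $post) {
        // What comes back is structured data (JSON), not a rendered page.
        echo $post['title']['rendered'] . "\n";
    }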

    The other approach out there these days involves using a GraphQL interface. This is more like opening a window directly into a database and making queries. The details of this don’t much matter for this discussion; the point is that it’s possible to install a plugin that creates a GraphQL access point for a WordPress site.
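
    Probably the best-known of those plugins is WPGraphQL, which by default exposes a /graphql endpoint. The same “give me some post titles” request looks roughly like this (again plain PHP, placeholder URL):

    // Minimal sketch: the same request routed through a GraphQL endpoint,
    // assuming the WPGraphQL plugin and its default /graphql route.
    $query   = '{ posts { nodes { title } } }';
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => json_encode(array('query' => $query)),
        ),
    ));
    $response = json_decode(
        file_get_contents('https://example.com/graphql', false, $context),
        true
    );
    
    foreach ($response['data']['posts']['nodes'] as $node) {
        echo $node['title'] . "\n";
    }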

    Wait, but why?

    Why would you take this headless approach, though?

    The obvious first answer is that it enables you to have a different language and framework running on the client side of things. If you want a React application that serves up a lot of server-side content, using WordPress as your CMS might very well make sense.

    Additionally, though, it gives you the capability to render and rerender a page in sections, so that you aren’t necessarily requesting a whole new page from the server every time anything happens.

    Now, as it happens, you can pull off this same trick using the new Interactivity API in WordPress, because it makes the front end capable of doing various things on its own. It lets you build a “headless-seeming” user experience completely within a WordPress context.

    It’s not clear yet how well the Interactivity API will fare, as it’s still relatively early days, but it’s an interesting option for dynamic front ends (plus it’s in use within WordPress core, so it’s not likely to go anywhere anytime soon).

    WordPress makes a pretty solid CMS, particularly where the content is of the human-readable sort.


  • Creating a Block with Cursor AI

    I’ve been trying to figure out the best approach to getting the most productivity out of AI-assisted coding in Cursor. Some things work jaw-droppingly well. Others create a rabbit hole of inexplicable coding failures that are more trouble than they are worth to debug and make work.

    Here’s a very simple example of something I was working on this morning: I wanted a WordPress block that would show an alert on a page with whatever message I put into it, and I wanted it to disappear on its own after it had been on screen for ten seconds.

    Keep it simple

    The takeaway, if you don’t care about the details, is this: you should stick to relatively discrete steps you’re asking it to achieve and you should check each bit carefully as you assimilate it into the code.

    As a placeholder, I’d been using a block from the WordPress repository called simple-alert-blocks. The simple part of the name doesn’t mislead. Nothing wrong with that, but it doesn’t fade.

    So I thought, I’ll use Cursor to make changes to it so that it always disappears after ten seconds.

    Only too happy to oblige

    Cursor happily did all this–it even looked pretty good as code, just scanning over it–and it didn’t work. Absolutely nothing happened. I started by asking it to debug the problem and it happily provided me with a “fix,” except that the fix simply re-applied code that was already there. So this improved nothing.

    I took a few minutes to look at it, but wasn’t seeing the problem. Later I realized that it had probably created a mismatch between the class name it was applying to the message box and the class name it was using in the css file. The class in the css file was .wp-block-pz-alert whereas the actual element in the DOM was using .wp-block-create-block-pzalert.

    I say this was most likely the problem because I no longer had the code by the time I figured it out, but that CSS mismatch was a theme throughout the whole process.

    Keep your context ungoofed

    Leading to another takeaway: if Cursor does something substantially screwy, don’t just fix the problem, reset your context so that it isn’t still chewing away with the wrong idea in the back of its mind somewhere.

    So then I asked it to just create a block that did what I wanted from scratch. This one also looked pretty good, but it wasn’t registering the block once I’d activated the new plugin (for once I didn’t forget the activation step). Again, I strongly suspect the mismatched CSS was in play, but there was also an idea I’d inadvertently introduced because the original simple alert block was still in context: that there should be a separate .js file with a “fade” function in it, rather than just including this in the view.js file.

    Doublechecking

    At this point, I decided to take the AI part in far smaller steps and now created a new block using the create-block script. From there, I asked for specific steps and checked that each step worked. It occurs to me that Cursor currently makes about the same number of mistakes as I do, so I should doublecheck my progress in more or less the same way.

    This doublechecking process eats away at the time advantage of having Cursor just blast out a whole plugin, but is still way faster than my usual process. This is in part because it remembers all the function definition and syntax details that I routinely forget. It’s massively more capable than the sort of autocomplete you get with something like Github’s Copilot.

    Even in this process, it fouled up the CSS again. But it did lots of other things that rely on details very specific to WordPress block development and had no difficulty with them.

    I asked it to add an attribute to the block, one called “messageText”.

    Note, though, that it decided to freelance on the “name” attribute. I did not tell it to create a pz domain and there were lots of other blocks in context that do it the way I wanted it, so conceivably it might have figured it out.

    Anyway, adding the attribute worked just fine, but in typical computer fashion, there were plenty of obvious related tasks that I had to specifically ask for.

    Initially, the suggested changes didn’t include importing the TextControl component, so it didn’t work. This is one of those problems that doesn’t really throw any errors; the block just quietly fails to register.

    I asked it to fix this and it replied with the cheery bullshit one sometimes gets from LLMs.

    And then the line was there. But you have to wonder…

    Anyway, when I really got down to brass tacks and was explicitly asking it to make changes in chunks that I could quickly doublecheck, the process went quickly. It would have gone even quicker if I’d reset the chat and started with fresh context.

    More reports from the coding trenches to come…