Arcan-A12: Weaving a Different Web
26 Jan 2026
This article is a companion piece to “Arcan Explained: A browser for different Webs” which covered how Arcan works as a browser engine. Some key takeaways from that article are:
The focus is only on running networked applications where the outermost one takes on the responsibilities of window management and display control, becoming the ‘desktop’.
Document browsing is a compilation step through separate tools that generates a signed, shareable application package.
It is recursive (an application can embed others, including itself) and can compose and interact with allow-listed local software.
Media decoding, media transforms, network communication and system integration are all delegated to per-instance sets of interchangeable privilege-separated programs.
It is essentially a browser take on a microkernel architecture. The choice in network communication program will control resource retrieval, link resolution, discovery and so on. This determines what kind of a web you end up in.
The reason why this is posted here and not on the main Arcan site is to emphasise this decoupling. It is but one possible solution, and there will be better ideas out there, without someone having to boil the ocean in order to try them out.
This article covers the design and choices of the included default implementation (afsrv_net), its command-line helper tool (arcan-net) and how they leverage the A12 protocol to form a web.
It is organised as follows:
In Recalling the old ways we take a trip down memory lane back to the days of bulletin board systems to look for good bits to bring back into style.
In A frayed web of separate worlds we argue that the ‘World Wide Web’ is anything but ‘World Wide’ and outline the set of problems that we want our solution to address.
In A12 as protocol we go through the characteristics of the protocol that we are building the rest on top of.
In A12 Web we layer in designs to address the problems of a frayed web using the A12 protocol, along with some properties we want it to have.
Finally, in Developer Story we get practical and go through the use of the tools available to actually build something.
Recalling the old ways
My early access to the Internet started through Bulletin Board Systems, specifically ones that translated select Usenet discussion groups and relayed e-mail over a system called ‘Echomail’.
If those words are unfamiliar to you, a BBS was mainly someone sharing a slice of their computer with others over a telephone network. It served one or a few people at a time, because each active connection required a dedicated telephone line and those were expensive.
See also: The BBS documentary.
Most of the boards I frequented had a very personal feel to them. It was more like being allowed inside someone’s computing living room than being presented with a streamlined and well-dressed corporate facade.
It was at once both intimate and intimidating on the rare few occasions when the SysOp (“System Operator”) decided to “break into chat” and your browsing of the local wares was interrupted by a seemingly inescapable one-to-one chat window.
Finding out about these places was an interesting journey in itself. Initial discovery was by word of mouth through a friend of a friend. As things sobered up, magazines took to providing listings. Here the experiences were on the milder side. There was less profanity and rarely mischief manuals (such as “the Anarchist’s Cookbook”), erotica or pirated software. Secondary discovery came through sitelists which the sysops curated themselves, or that someone had dumped into a shared upload directory or snuck into the ‘release information’ files packaged inside some piece of downloadable software.
Often enough a phone number did not actually lead to the board in question. Instead of the familiar tone of the modem connection handshake you got the voice of someone who did not appreciate being woken up at two in the morning – perhaps you had misread the operating hours/days (not everything was 24/7) or the sysop had grown tired of it all and closed the thing down. Such is life.
From this shallow description alone we can see the outline of some sort of troubled web story: Through word of mouth links between a named entry (Larry’s Land of Leisure and Suites thereof) and an address (here, phone-number) you could access resources, including links to other resources.
As far as links go, they were not particularly good:
Unidirectional - unless told, the one being linked to was unaware of who referred to them and thus had no say as to whether the extra attention was appreciated or not.
Non-descriptive - you had to try and resolve the link to figure out what it actually linked to, or if it was even valid in the first place.
Constraints as a side channel - “open from 08.00pm to 06.00am Fri-Sun” was something that might be mentioned on a login screen (less than helpful) or on the sitelist. Accuracy varied wildly.
Local resources were not addressable - you could point someone in the general direction of something, but once they were connected to the board itself the true hunt began.
Local termination - resolving a link does not let you find further ones without processing the resources at the linked location.
Ephemeral - should the address mapping change, there is no mechanism to rediscover the linked resource through other paths in the network.
Much of this, and more, still applies to links on the world “wide” web. To their credit, URLs did fix addressing local resources (for a time) and DNS did something about the ephemerality, only to be undone by shortening services.
Still, think how much less useful even the first versions of HTML would have been without local resources being part of the language of the ‘link’. The technical solution is about as trivial as can be, yet the consequences are massive.
As a snack for thought, take the ‘local termination’ part: this is rather unusual if you think about the underlying data structure; it is a graph, but it is not exposed as such. A link may lead to other links, but first you need to ‘scrape’ (download, parse, extract) them from the outer resource.
This left a big discoverability hole to fill, one big enough to spawn some of the wealthiest companies in the world. It is tempting to wonder how things would have turned out if the other weaknesses had been addressed as well.
The obvious elephant in the room is Ted Nelson’s poetic Xanadu and how it keeps being brought up as the ‘what could have been’ solution because it used backlinks to avoid unidirectionality and transclusions for subcontent referencing.

Alas, the rest of its ‘web as living document’ story had a very niche appeal, and the browser part was vapourware for the longest time and then very quirky and unintuitive. As important as the linking story is for the base layer of a web, there also needs to be something there: content is king.
Tech doesn’t just go away, but rather moves into one of many retirement homes for retrocomputing romantics. I don’t recall a strong inflection point where the BBS stopped being my primary source for information and exploration; it “just happened” as an uncoordinated, silent shift. The Internet just naturally became the new default, but the transition and the loss of something could be felt.
The real kicker was the shift in discovery, with search engines as the natural new starting point, some being Web while also somehow not being ‘Web’, like NTNU.no’s FTP search. They quickly became the way to discover content. More seasoned explorers, like the late and great +Fravia pivoted from using disassemblers and debuggers to reverse the dynamic structure of computing, to unpacking the search engine ‘command-line’ to reverse the structure of the web.
Other discovery solutions were curated collections of links as ‘portals’, like the original Yahoo. Those were similar to magazine sitelists, which also switched to providing links as URLs rather than phone numbers.

It might seem comical and distant now, but there was real value in buying a computer magazine for suggestions on where to go on the web. Several Internet Service Providers at the time went through the extra effort of bundling subscriptions to such magazines. Still, even echoes of BBS-style discovery remained for a while in the shape of ‘Web rings’ and IRC chatbots serving sitelists.
To shorten the story somewhat I will skip past the evolution of forums into ‘community’ sites, how they were replaced by ‘social media’ and so on. Instead I will simply suggest that the ‘web’ oscillates between different modalities, from (open, distributed, public) to (closed, centralised, invite-only). The nudge needed to switch the trend from the one to the other is spam.
The ‘open web’ has, to me, become the least useful resource on the Internet, and that rapidly accelerated to large hadron levels with the sheer amount of bullshit synthesis that has weaseled its way in between me and whatever I was searching for or whoever I was communicating with.
I am anything but alone in feeling this disconnect. For a well worded view on the matter, look no further than Splitting the Web. That is, if your browser still allows you to, and the link is still working.
A frayed web of separate worlds
One thing that the many web pundits I have spoken to over the years have in common when asked to narrow down what the web ‘is’, is a certain glee over unity. Grab a (reasonable) device! And a browser! Browse the web! From anywhere!
This doesn’t answer the question and does not match any reality of mine. I have reasonable devices in every form factor, and an even larger pile of completely unreasonable ones. I use several browsers, but it is an exceptional day if two browsers on two different devices behave even close to the same.
Heck, if two browsers run on the same device but get different geolocated source IPs they are served different content on the regular, and more so with choice examples, say between China and Japan, or depending on whether the almighty Cloudflare deems you worthy or not. That is a rather narrow idea of ‘world-wide’.
For every other click I am supposed to prove my humanity by clicking a box or waiting a few seconds while my computer crunches pointless numbers. Possibly both.
So where is it? Hardly in the protocol. Otherwise we could have dropped that part of the URL long ago. Even before shifts towards the likes of QUIC or TLS (or is it SSL?) becoming ubiquitous, browsers implemented a wide range of them, from Gopher and FTP to RTSP.
You also won’t find it in the myriad of document containers or the resources referenced inside. What once was ‘Flash’ or ‘Silverlight’ is now an exercise in computing necromancy to relive. Is that javascript of yours following ECMAScript.1997 or 2025? One day you might get to see that PNG as JXL, if the Gods so decide.
If you are not big on appealing to the authority of the W3C, what you are left with now are the links and how people use them. As we implied previously, the properties of ‘links’, how they are discovered, and how they enable discovery largely control what kind of web emerges.
Tangent striking dischord
Discord is an interesting phenomenon that deserves to be included here for a handful of reasons. I will fight the urge to attempt a larger breakdown, but it does serve as a connection to the BBS story from before, albeit in sheep’s clothing.
To the owners it is probably the happiest of little accidents, just like the Covid pandemic was to Slack. The numbers displayed on Wikipedia and friends are probably imaginary, but suggestions of around 3 million accounts in 2016 growing to well over 500 million in 2025, with a little less than that actually active, seem reasonable. That is quite something for what was basically paratext and coordination around gaming – maybe the ‘Linux desktop’ could learn something from this.

The connection to the BBS story is that it gives you a turnkey solution for spinning up your own ‘personal’ ‘server’. In reality it is anything but, merely a namespace where you only get to partially define a small set of the rules, but at least you get to pick an icon or something.
The actual agency is more like the old surveillance camera meme:

Technically it is completely uninspired. The browser story is Electron as a bodge patchset on top of some dull variant of Chromium. Somehow it is always my lucky day when launching, with ‘pretend consent’ language forcing you to update whatever to whatever, just to then automatically download some more updates.
The linking story is somehow worse than the plain web. First you have one form for local object link embedding. Those are unlikely to be externally content-addressable URLs, and thus not shareable.
Then you have pseudo-resolution of regular URLs (into ‘previews’) that then get forwarded to another browser, even though that is most likely the same code you were already running.
Its search story is spartan but also telling: a basic command-line with some magical prefixes, its search space limited to that of the current ‘server’. I suspect it is very deliberately so before IPO, and that a possible sell later is both training data when all other wells have gone dry, and a chatbot interface to deliver masqueraded ads and bias to some; dark secrets to others, all based on what- and who- you are willing to pay.
That something like the TOS-violating Searchcord managed to spark anger and controversy over basic search of partial indexing across public servers (that opted in!) is also telling of the average user expectation.
Content wise it is a hotspot for the usual uninteresting filth like grooming; violence; expressions of carnal desires; bullying and brigading. That is something of a variable to monitor as part of a larger health and sanity check. If it is completely absent I get suspicious. If it is overflowing, I walk away.
Still, even though the presentation has all the personality of a wet fart captured in a bag and painted grey, it is my go-to for some slim chance of an actual interaction with an actual person over a niche interest. That is not gaming paratext, but areas as diverse as CNC Machinery, Pinball Repair, Laser engraving, Electronics and Reverse Engineering.
My point is that even though the building blocks and overall purpose are wrong - a strong and playful human connection is still possible and can spring up in the most unlikely of places. We’ll need that going forward.
A12 as protocol
Time to get technical and cover the last building block before also getting practical. The base A12 protocol was introduced here: A12: Visions of the fully networked desktop.
It provides means for sharing an interactive media source (like an application window or composited desktop) to a sink in either a push based configuration, like how X11 remoting worked, or a pull one like VNC, RDP, or SSH would.
That is done over one of many transfer channels, each being unidirectional and coupling a possible video, audio and binary ‘blob’ stream corresponding to source windows.
The philosophy was that of ‘one desktop, many devices’. This means having individual devices be responsible for providing one or many sources over a network, and the desktop finding and composing them together as seamlessly as possible.
Among its building blocks is the ability to redirect a source from one sink to another while it is still running. This was demonstrated already back in 2019 by ‘dragging a living window from one machine to another’ as seen in the clip below.
There is also an optional extension to the protocol that enables previously paired sources and sinks to find each other again by broadcasting sets of challenge hashed public keys backed by petnames. This turns cryptographic identity into the link itself.
This avoids having to rely on DNS, DHCP provided hostnames, mDNS or other naming services for local (re-) discovery.
That is not enough for what we need here, and that brings us to the final extension. It introduces a third possible role, the directory. It acts as a rendezvous for discovery; traffic relaying; NAT traversal; shared and private file/state store; application hosting and source/sink match-making.
The directory server forms a messaging and storage namespace for each hosted application. By default this is broadcast between all sinks running an application. This works for light collaboration and coordination. The reference desktop environment for Arcan, Durden, uses this to synchronise clipboard and share input devices.
For something more refined we can slot in a directory server side control application to match. It uses the same structure as a regular Arcan application, but its role is to coordinate and regulate communication and to mediate access to other networked resources.
Such resources can be those that are necessary to the dynamic side of the application itself, like hosted media, indexing and search.
To achieve that there are some special functions in the scripting API that we will return to when we get to linking. Two of particular note are link_directory and reference_directory. These let us define different kinds of links.
This leads us to the next section, as we can now form webs.
A12 Web
We have reached the philosophy of ‘the desktop, reaching out’, similar to the BBS covered in ‘the old ways’, set to counter the problems from ‘a frayed web of separate worlds’ and either fizzle out into obscurity or create new terrifying problems – we all know where roads paved with good intentions might lead.
To be more direct and practical we will explain things using the command-line tooling as a starting point. For development purposes, we have hosted an Arcan directory server at arcan.divergent-desktop.org for years.
Running something like:
arcan-net arcan.divergent-desktop.org explain
Or, if there is a cached / petnamed entry already in the local keystore, e.g. ‘dd’:
arcan-net dd@ explain
would do the following:
1. Create an outbound a12-connection to arcan.divergent-desktop.org.
2. Generate an authentication keypair and query for trust unless known (TOFU).
3. Issue a LIST command with [notify] enabled.
4. Wait for a reply with a name field that matches ‘explain’.
5. Issue a download request for key-associated state matching package ID from #4.
6. Issue a download request for the package ID from #4.
7. Verify integrity of package from #6.
8. Verify authenticity of package compared to signature in manifest from #7.
9. Unpack into temporary storage.
10. Start runner process with sandboxing and I/O transfer channels.
11. Inject any state from #5 and signal runner to execute.
12. Join directory messaging group matching ID from #4.
13. Map messages and dynamic resource access between runner and directory until termination.
14. Cleanup and transfer state.
The ‘until termination’ point has three possible triggers. The first is the user simply shutting the application down. That creates a snapshot of application-persistent key-value pairs and uploads it into the matching slot from #5.
The second is that an updated version of the application appears (signalled thanks to the notify flag from the LIST command in #3). The default behaviour then is to initiate a download of the update matching steps (6,7,8,9) and have the runner store-restore state to itself.
The last is a scripting error causing termination. The runner can be instructed to continue regardless; to snapshot, shut down and retry; or to roll back to a previous version. With any of these options the runner may also send a report rather than (possibly broken) state as per #14.
At this stage there are already a number of deviations from the hypertext transport way of doing things, and we have only taken a peek at the basics. Outside of the actual package format and steps #5 and #13, the chain above is generic enough that it could as well have been a model for a mobile app store.
First, any code and necessary ‘offline first’ data is present in the initial package transfer. Its size and checksum are known as part of the LIST command, so caching code+data is trivial. Package contents are signed, as is client managed state; altering either on the server side is a distinguishable error condition.
While hotly debated internally, the engine blocks any script loads from outside the signed package and a small curated set of builtin ones. There is no ‘hide code in strings and unpack into eval()’ facility for getting unsigned code to run, and therefore no ‘middleboxes injecting code’ adtech-style tampering.
Second, any state store is deferred to the user and their decision to leverage (or not) an authentication key-bound server side store. There is no need for either cookies or a ‘login’ form - authentication primitives for the connection carry over to the application layer and, if a server side processor is attached, are salted to an application-bound identifier as part of the infrastructure.
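As a rough sketch of what that looks like from inside the appl, assuming the application-persistent key-value pairs mentioned earlier map onto the engine’s regular store_key/get_key calls:
-- persist a simple counter across runs; whether the snapshot also ends up in
-- the key-bound server side store is up to the user and runner, not the appl
local count = tonumber(get_key("times_started")) or 0
store_key("times_started", tostring(count + 1))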
Third, traffic is owned by the hosting directory. This is a big shift and ties heavily into the linking story. The engine configuration for the case we discuss here gets its primary traffic through either arcan-net or afsrv_net when the outer application also runs the desktop itself. These, in turn, exclusively communicate with the specified directory.
This calls for a short example to get any further. Say that you have an A12 web app that is about image sharing and communication around shared images with friends.
The general UI, layout and chat overlay are handled by the static signed appl package. Dynamic chat updates come through message passing events. The actual images do not make sense to ship in the bundle expected to load/run at startup, so they are retrieved on demand from the server. In the old HTML world, that would be something like:
<img src="https://some.site/path/image_name.png?bunch_of_state_dont_leak">
In an arcan appl, that would be:
local stdin = net_open("@stdin", net_callback_handler)
local image_file = open_nonblock(stdin, {}, "image_name")
load_image_asynch(image_file, image_callback_handler)
The asynchronous event handlers and transfer queueing hints have been omitted for brevity. Here, net_open gives a reference handle to the current network connection. It is then used to initiate a non-blocking serialised read of image_name that gets forwarded to image decoding.
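For a feel of what those omitted handlers could contain, here is a minimal sketch; the status kinds in the load callback follow the regular asynchronous image loading path, and the network handler is left as a stub:
-- illustrative only: react to the asynchronous image load finishing or failing
local function image_callback_handler(source, status)
    if status.kind == "loaded" then
        show_image(source)
        resize_image(source, status.width, status.height)
    elseif status.kind == "load_failed" then
        delete_image(source)
    end
end

-- illustrative only: connection state changes, directory messages and so on
local function net_callback_handler(source, status)
end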
All resource requests follow this pattern. The directory server is able to route / cache / load whatever is necessary to fulfill a request, and the reference implementation has several options for this, but the point is that the directory owns the traffic.
For the client end this means that network filtering and monitoring can be very aggressive and request record/replay is trivial for both archival and development purposes.
Linking
We finally have enough context to discuss links. The linking model here has two forms: “unified” and “referential”. The referential link is user facing, so we start there.
When a connection is made over A12, the initial handshake covers the expected local and remote roles: ‘Source’, ‘Sink’ or ‘Directory’. When hosting a directory server, you can specify outbound referential links through the reference_directory('myfriend') function call in the config scripting API.
This will create a worker that makes an outbound connection, and when this worker is alive and authenticated, it will be among the results sent in response to the LIST command.
This allows local clients to open ‘myfriend’, either tunneled or redirected. The local and remote directory workers transfer public keys and other extended authentication primitives for ‘myfriend’ to transitively trust a new connection to some specified degree.
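As a minimal sketch of what that looks like, assuming reference_directory is called from the same ready() entry point as the config.lua examples in the Developer Story section:
-- config.lua sketch: 'myfriend' is a petname for a directory in the local
-- keystore; while the outbound worker is alive the link shows up in LIST
function ready()
    reference_directory('myfriend')
end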
We can see a few properties for this kind of link:
Bidirectional, authenticated and revocable - A link can only be established if the linked entity agrees to it, and it lives only as long as both parties maintain a connection.
Typed - A path through directories always terminates at a directory, a hosted source, sink, or application, and you know what you get in advance.
Presence is reachability - The linking entity updates its local directory to reflect connection changes, and the protocol propagates this to active clients.
Rediscoverable - When walking the path of a link the clients learn about public keys of individual directories. The discovery protocol extension lets a linked entity be rediscovered even if the link itself has been revoked.
This is clearly not without trade-offs. Transitive trust models and mapping petnames to authentication primitives to sidestep Zooko’s Triangle can get complicated fast, even for networks of only your own devices.
On the other hand, DNS is not necessary. It is completely possible to navigate only through a path of petnames. The idea is that in a well-distributed web, we run into six degrees of separation.
We don’t link to resources within a hosted application. That is deliberate.
Links being contractual sets a low cap on the number of links that are feasible to maintain. That’s a feature, not a bug. The cost to resolve grows linearly with the number of directories in a path.
A unified link is also specified in the config scripting API, by calling link_directory('myfriend'). The main difference against referential links is that the connection is not visible in the client presented list.
This is because the linking parties form a unified namespace of exposed applications, with their respective worker processes synchronising hosted applications and files and instantiating server side script runners as needed.
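A matching sketch for the unified form, under the same assumption about the ready() entry point:
-- config.lua sketch: unified link, the linked directory's applications and
-- stores merge into the local namespace instead of showing up in LIST
function ready()
    link_directory('myfriend')
end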
Search and Indexing
The last thing on the Web menu here is search in the sense of ‘From what we know and can access, what best matches your query?’ and not the ‘what we or our sponsors think that you should see’ variation.
The cheap solution is of course to leave it in the hands of developers and see what emerges. While that can come as a side effect of growth in popularity, part of the ‘many devices, one desktop’ narrative means that lower level mechanisms would be useful.
There are a few parts of the protocol to leverage here.
One is that binary stream uploads/downloads have a few typed alternate slots. One such slot is METADATA. This means that with upload permission someone can pre-index/analyse locally, and attach that to the server side store going forward. Similarly, a controller providing, for example, an image hosting service can see that something does not have any metadata attached to it, fire up some analysis tool and attach the results itself.
Another part is that the name part of requests that trigger binary stream transfers reserves the ‘.’ prefix for protocol support use. The server implementation uses .monitor, for instance, to negotiate an interactive debug interface stream and .debug to collect crash dumps that have accumulated.
Another name that gets special treatment is ‘.index’. Normally it can be downloaded as a means to list the files in the private store attached to the key you authenticated with. By also specifying the namespace identifier matching the application identifier a controller is assigned to, it instead lists the available resources in that controller’s server-side store, and that listing would propagate across a network of unified links.
If you upload a file to ‘.index’ you actually slot in a filter that corresponds to your search query for that namespace. This will influence future ‘.index’ downloads.
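From the appl side, requesting such an index could look roughly like the sketch below, reusing the open_nonblock pattern from the image example; how the namespace identifier is specified and how the filter upload is expressed are left out:
local netcon = net_open("@stdin", function(source, status) end)
-- '.' prefixed names are reserved for protocol support use
local index = open_nonblock(netcon, {}, ".index")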
The server also has an external resolver mechanism. To understand this, both file and index resolving first goes to the controller. This has the option to reject the request, or to forward or remap it into its local file-store. The latter can be substituted for an external resolver that takes care of translating to other protocols, e.g. AT, IPFS, Torrent, with caching. This tactic is also used for implementing unified linking.
There is a lot more to unpack here when it comes to protection against abuse and collaboratively reaching an accurate .index that all participants can sign off on -- but that is for a different day.
Developer Story
With enough protocol nuances in place we can dig into some of the practicalities of developing a networked Arcan application, but just enough to follow the web story here rather than the concrete APIs as such. For those there are numerous guides, step-by-step instructions and examples already, in both Wiki and repository documentation form.
The only thing we need to recall is that an Arcan appl is a set of scripts with some minor file- and function name- patterns.
A minimal form is just this:
mkdir $ARCAN_APPLBASEPATH/myappl;
printf "function myappl()\nend\n" > $ARCAN_APPLBASEPATH/myappl/myappl.lua
Assuming that you have permission to install or update an appl (which are different permissions) to a directory server that will host it, all you have to do is:
arcan-net --sign-tag mykey --push-appl myappl somedir@
This will package, sign and transfer ‘myappl’ to the directory pointed to by ‘somedir’ in the current keystore. Creating an entry and generating keys can be done with:
arcan-net keystore somedir host.or.ip
It will also tell you the public key that the server needs to grant permissions to. Setting one up has a lot more nuance to it, but with the arcan-net installation you would have a config.lua.example to work from and then run:
arcan-net -c /path/to/my/config.lua
You can then test-run:
arcan-net somedir@ myappl
Which should just become a black window. Keep it running, but at the same time let’s push a change that would break it:
echo "function myappl()
bad_function_call()
end
" > $ARCAN_APPLBASEPATH/myappl/myappl.lua
arcan-net --sign-tag mykey --push-appl myappl somedir@
The already running client downloads the update and it breaks. If configured to permit it, it will create a crash report, upload it and roll back to the last known working version.
arcan-net --get-file myappl .report - somedir@
That will collect user crash reports, bundle them together and send them back to us (and pipe to standard output). The reports are valid Lua scripts, so we can have analysis tooling that itself generates an appl.
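A minimal sketch of such tooling, assuming each report is a self-contained Lua chunk that returns a table (the field names are illustrative only):
local chunk = loadfile("report_001.lua")
if chunk then
    -- run the chunk protected so a malformed report cannot take the tool down
    local ok, report = pcall(chunk)
    if ok and type(report) == "table" then
        print(report.version, report.error) -- illustrative fields
    end
end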
The same tactic is used for slotting in a controller, just with --push-ctrl instead of --push-appl.
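Spelled out for the running example, that would look like:
arcan-net --sign-tag mykey --push-ctrl myappl somedir@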
To look at a more advanced example, we will take streaming media playback. The full appl code is as follows:
function myapp()
    net_open("@stdin",
        function(source, status)
            if status.kind == "connected" then
                play_media(source)
            end
        end
    )
end

function play_media(ref)
    -- request 'test' from the controller-side, application-specific namespace
    local fio = open_nonblock(ref, {}, "appl:/test")
    launch_decode(nil, "protocol=media",
        function(src, status)
            if status.kind == "bchunkstate" then
                -- the decoder announced what it accepts; hand it the network resource
                open_nonblock(src, {}, fio)
            elseif status.kind == "resized" then
                -- output dimensions known: show the video object and match them
                show_image(src)
                resize_image(src, status.width, status.height)
            end
        end
    )
end
The first thing to note is the net_open call. This explicitly says that we want to access and communicate with the directory the appl was downloaded from. Should that call fail, we can assume to be running offline.
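A minimal sketch of that check, assuming a failed net_open hands back a reference that does not pass valid_vid (the exact failure signalling may differ):
local netcon = net_open("@stdin", function(source, status) end)
if not netcon or not valid_vid(netcon) then
    -- no directory reachable: stick to resources bundled in the appl package
end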
The play_media function pairs two asynchronous processes. The open_nonblock part is used for non-blocking asynchronous file input/output, whether from within the appl package, from user-defined namespaces, or through existing processes via a reference handle.
The first call goes through the handle to the network connection, and the appl:/ prefix tells it to go through the controller side application specific namespace. Any extra controls about transfer queueing/buffering/parallelisation preferences would go into the passed option table.
The launch_decode part will spawn a media decoding process, and by not providing it with direct input it will send a set of extensions it supports (bchunkstate), which we ignore here and just pair with the resource reference we got from the previous open_nonblock call. On the first 'resized' event we set the associated video object to visible and size it to match the source dimensions.
Let's slot in a matching controller:
function myapp()
end

function myapp_load(cl, resource)
    if resource == "test" then
        return "test.mp4"
    end
end

function myapp_store(cl, resource)
    -- block all attempts at storing files
end
The _load entrypoint will be triggered when the client open_nonblock is issued. Here we map it to an actual file in the server-side store. This is where we can add additional permission checks or selection logic. It is also where the previously mentioned 'external resolver' fits in, if the config.lua set for the server were to say:
function ready()
    local resolver =
        launch_resolver("/some/executable",
            function(source, status)
            end
        )
    appl_set_resolver("myapp", resolver)
end
The mapped test.mp4 request would be forwarded to the process of /some/executable where we can have advanced mapping to other protocols and storage solutions.
Let's do something more advanced. We add an arcan-shmif client to the server-side policy database: arcan_db add_target BIN xserver /usr/bin/Xarcan -exec wmaker. Then, in the controller we add the following function:
function myapp_join(cl)
    launch_target(cl, {}, "xserver",
        function(source, status)
        end
    )
end
As soon as a client joins, the server would spawn an instance of an X server with the 'WindowMaker' window manager attached. This makes a loopback connection and registers as a hidden source only visible to the specified client. The client gets a notification that the source is available, and that would translate to a segment_request event in the net_open event handler. If it accepts it:
net_open("@stdin",
    function(source, status)
        if status.kind == "segment_request" then
            -- event_handler (not shown) deals with the new segment afterwards
            accept_target(640, 480, event_handler)
        end
    end
)
We now have the means to composite and interact with the source as if it had been launched locally.
All of this has been assuming that the client end has the Arcan stack installed and available. This might not be the case for weaker 'thin' kiosk-like devices or in a more limited context, like one of those horrid vendor-locked app ecosystems. Should the full stack be present on the directory server however, and the config.permissions.applhost option be set to a tag (group) matching your authentication key, a simplified viewer that only implements the a12 protocol parts, like Smash, could be used if the client requests the appl as a source and not as an appl package download. This would cause the server to spawn an instance of Arcan with a loopback connection as a directed source to the specific client.
The purpose of bringing all this up here is not as a practical guide, but to provide enough context to highlight a few things:
The tooling to browse, host and develop is the same. It is a property of the network solution itself, not ‘browser:developer tools’.
Updates are atomic and signed, making life much more difficult for parasitic intermediaries.
Every form of communication is explicit, from link contracts to how the appl communicates.
The execution form is local-first, into locally-hosted, into networked.
The scaling model is small nodes, large networks.
The 'frontend', 'backend' and server development model is one and the same.
Both client appl and controller are composable due to the naming scheme.
In Closing
There is a lot of technical detail omitted here around decisions and trade-offs, and there is still a small time window for change before all this is locked in, especially around server side APIs and small-scale payment processing (with GNU Taler being a strong candidate) to avoid the 'need' (they will always find a way) for ad-tech parasites.
Other future plans involve preserving and translating current web contents into this, with tooling for layering in collaborative features over the contents. There is still ample room to join in and play around like it is the '90s again, for a timeline that doesn't look as dark and grim as the current one does.
That said, a substantial goal for all of this is personal agency over any kind of mass adoption -- I am stubborn, not naive. That anyone outside of a small group of tech die hards would go all Yippee Ki-Yay over this is a moonshot and that is fine.
There are a number of places in my life and home where the current web- and browser- story will be pushed out. Places like my network cameras, heat pump HMI, various maker devices, home theater, gaming gear, mobile devices and so on. If I can just revert those, I might at least tolerate still having to order groceries in some throwaway browser on a throwaway device. At least for now.