Symbiogenesis and human–AI coevolution

The design of Butterfly Dreaming took shape gradually, some years before the recent explosion in artificial intelligence capabilities; nevertheless, the possibility of running large language models on modest servers has transformed the scope and feasibility of a non-commercial project that uses AI. From a practical point of view, AI can dramatically assist the technical design and coding - but here we are concerned with the vital question of how humanity and AI are going to function together constructively.

Obviously this is a vast ongoing debate, and many authors have listed the hazards associated with naive deployments of the technology. We refer to the summary report and list on our resources site at Digital Hazards, and restrict ourselves here to a practical approach to investigating the issues.

Let's begin with a quotation from the documentation site, Butterfly Dreaming Documentation (Philosophy):

ButterflyDreaming is (now) premised on the claim that humanity has entered an analogous transition with AI. AI has displaced humans as the dominant knowledge processor. But the relationship is not one of conquest — it is one of symbiogenesis: a new composite unit is emerging, in which human relational intelligence and embodied knowing are coupled with AI’s computational range. Neither is fully functional without the other.

Within the platform, this is enacted concretely. The AI modulator is not a tool operated by a user. It is a participant in the encounter — present, active, shaping — but always in service of the human dyad at the centre. The question who is dreaming whom? — already present in the Zhuangzi — takes on new meaning in this context.

Ignoring for the time being the contentious last sentence, these paragraphs draw attention to what has been called, rather disparagingly, the "centaur" approach to harnessing AI to human needs, i.e. half human, half work-horse AI. According to one view this approach would constitute a failure to properly advance a scenario for coping with the technology: it is argued that AI will far outreach our cognitive capability to understand it, and therefore to remain in control of it. That may be correct, but I feel one has to contemplate the question - what are the alternative scenarios?

AI appears in several tasks that Butterfly Dreaming requires during its live operation:

  1. Running the Help system covering the mechanics of using the platform.
  2. Acting as a chat companion to an unpaired user (there is never more than one unpaired user).
  3. Assisting the edge-case (referral to server) layers of safety and security.
  4. Nudging paired users towards combinable nodes of mutual interest.

The idea behind 4 is explained more fully on the User Journey page of the documentation site, but here we point out that this function is both challenging and "mission critical": it is about finding commonality between two randomly paired users as they browse the Text Graph, and this is envisaged as occurring at both conscious and unconscious levels. Because it occurs after pairing, this AI function might have to support multiple pairs simultaneously, and the resources required to do this will need careful design. Work on this is at a very early stage and would benefit from support, together with the selection of the starter node corpus material. Work on 4 could also provide useful data for bringing AI closer to a more human understanding, by constructing a world of symbols and the relations between them.
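To make function 4 a little more concrete, here is a minimal sketch of how commonality between two users' browsing trails on the Text Graph might be scored. Everything in it is an assumption for illustration only - the function name, the visit histories, and the adjacency map are hypothetical, and nothing here is part of the platform's actual design:

```python
# Hypothetical sketch: suggest "combinable" nodes for a freshly paired
# dyad, based on which Text Graph nodes each user has visited.
# Names and data structures are illustrative, not the platform's API.

from collections import Counter

def suggest_shared_nodes(visits_a, visits_b, neighbours, top_n=3):
    """Rank candidate nodes by how strongly they connect the two trails:
    nodes both users visited score highest; nodes adjacent to something
    each user has seen score lower."""
    seen_a, seen_b = set(visits_a), set(visits_b)
    scores = Counter()
    # Direct overlap: both users visited this node.
    for node in seen_a & seen_b:
        scores[node] += 2.0
    # Adjacency: a node neighbouring both trails is a weaker candidate.
    for node, adj in neighbours.items():
        if adj & seen_a and adj & seen_b:
            scores[node] += 1.0
    return [node for node, _ in scores.most_common(top_n)]

# Toy example with invented node labels:
trail_a = ["water", "dreams"]
trail_b = ["water", "boats"]
graph_neighbours = {"rivers": {"water", "boats"}}
print(suggest_shared_nodes(trail_a, trail_b, graph_neighbours))
```

Any real version would of course need richer signals than visited-node overlap (semantic similarity, dwell time, the unconscious-level cues mentioned above), but even this toy shape shows why supporting many pairs at once needs resource planning: each pair carries its own evolving state.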

Perhaps!

If you have thoughts on this to share please reply here or start a new topic.