
Friday, November 10, 2023


 

Me and My AI #2

LLMs: the bath is overflowing

 

Large Language Models – ChatGPT et al – can generate endless volumes of textual material in effectively real time. We humans face the possibility of total and utter immersion in more ‘important’ and/or ‘essential’ reading than we can possibly take stock of.

Already, folk who need to keep up with what’s happening in their specialised fields complain of a rapid uptick in the volume of material directed at them as contributors leverage AI. You too may have had similar experiences.

How will content creation and consumption change as a consequence of the potentially exponential growth of written material within any given field of reference?

 

Early adopters of LLMs are already out there leveraging the window of opportunity between the widespread availability of the technology and the rest of the world catching on – selling material generated entirely or largely by generative AI as if it were of purely human origin – and enjoying the margins that AI-enhanced productivity grants them. OpenAI is already marketing ‘GPTs’ – mini ChatGPTs that purchasers can pre-load with their own content and preferences and use to generate client-specific content.

It’s a cycle that’s played out many times in the history of technology, particularly for digital technologies. An exciting new tech with relatively low capital costs emerges and undermines existing markets – in this case, writers (hey, that’s me...). The creators of the new tech rapidly generate a sizeable income. Early adopters put the tech to work and enjoy large margins. Other folk notice what they’re up to and pile into the same space.

Soon, consumers’ enthusiasm for this bright new thing begins to wane. Competition causes margins to collapse and the mergers and closures begin. A few years down the track, the tech is fully incorporated into the larger market.

 

Setting that familiar market cycle aside and focussing on this instance: what’s specifically interesting is how human intellectual networks and AI technology networks might respond to this technological step up. How will we deal with all this stuff we previously might have liked to read but now won’t have time to?

Initially, we’ll apply personal filters to select material and limit the volume vying for our attention. Google and other search engines already do this in a limited sense. Services such as LinkedIn, Twitter and Facebook do too, allowing users to set filters of their own. So there’s nothing particularly innovative in this response per se, though augmenting filters with AI will make for a better user experience – for example, a filter that counteracts the ‘clickbait’ hooks many content producers insert in their material.
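To make the idea concrete, here’s a toy sketch of such a personal filter. Everything in it – the `Item` shape, the clickbait marker list, the scoring rule – is illustrative, not any real service’s API: items arrive with a headline and topic tags, clickbait phrasing is penalised rather than rewarded, and the remainder is ranked by overlap with the reader’s declared interests.

```python
# Toy personal content filter. All names (Item, CLICKBAIT_MARKERS,
# score) are hypothetical, for illustration only.
from dataclasses import dataclass

CLICKBAIT_MARKERS = ("you won't believe", "this one trick", "shocking")

@dataclass
class Item:
    headline: str
    topics: set

def score(item: Item, interests: set) -> float:
    # Base score: overlap between the item's topics and the
    # reader's declared interests.
    s = len(item.topics & interests)
    # Down-weight clickbait phrasing instead of rewarding it.
    if any(m in item.headline.lower() for m in CLICKBAIT_MARKERS):
        s -= 2
    return s

inbox = [
    Item("You won't believe this LLM result", {"llm", "ai"}),
    Item("Benchmarking retrieval pipelines", {"llm", "search"}),
]
ranked = sorted(inbox, key=lambda i: score(i, {"llm", "search"}), reverse=True)
```

A real AI-augmented version would swap the marker list for a learned classifier, but the shape – score, penalise, rank – stays the same.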

The longer term is more interesting: a potential evolution in the information ecosystem, triggered by the capacity to create near-infinite textual content. Here’s one pathway:

As this model – the consumer’s filtered stream versus the content generator’s flood – evolves:

- The content being generated becomes more and more specific to the end user and their expressed preferences, interests, etc.

- The filter and the generator enter into a negotiation process – after all, the generator can create an endless variety of written material in real time and the filter can re-write it equally quickly, so why would either bother exchanging finished text at all?

- Instead, the new information dispersed by a generator node remains in a ‘pre-textual’ form – the numerical values that machine learning algorithms process directly – and attaches to these values vectors and other metrics that weight the value of the new information, in a form designed to appease the consumer’s filter.

- The consumer’s filter weighs up the values and metrics attached to this information against other available information sets and the consumer’s own preferences. Only then does the information set get integrated with other sets and turned into a natural-language form suitable for consumption.

- A mediator might moderate the vectors a generator inflates in its attempts to insist its information is of the highest order, most recent, most relevant, etc.
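The exchange sketched above can be mocked up in a few lines. This is a fantasy protocol, so everything here is an assumption: the generator publishes a ‘pre-textual’ packet (an embedding-like vector plus self-reported, inflated metrics), a mediator clamps the self-promotion onto a bounded range, and the consumer’s filter scores the packet against a preference vector before any text is ever rendered.

```python
# Toy sketch of the 'pre-textual' generator/filter negotiation.
# The packet layout, mediate() and filter_score() are invented
# for illustration; no real protocol is implied.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Generator node: numeric payload plus metrics designed to appease filters.
packet = {
    "vector": [0.9, 0.1, 0.4],                      # pre-textual representation
    "claims": {"relevance": 9.5, "recency": 8.0},   # self-reported, inflated
}

def mediate(claims, cap=1.0):
    # Mediator: rescale self-promotion onto a bounded [0, cap] range.
    top = max(claims.values())
    return {k: cap * v / top for k, v in claims.items()}

def filter_score(packet, preference_vector):
    # Consumer filter: similarity to preferences, tempered by the
    # mediated (not the raw) claims.
    claims = mediate(packet["claims"])
    sim = cosine(packet["vector"], preference_vector)
    return sim * sum(claims.values()) / len(claims)

s = filter_score(packet, [1.0, 0.0, 0.5])
# Only if s clears the consumer's threshold would the packet be
# rendered into natural language for reading.
```

The point of the sketch is the ordering: valuation happens on numbers, and natural language is generated last, only for material that survives the filter.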

Such a system demands collusion between creators and consumers. Then again, the first generation of mass-market LLMs already effectively operates like this – only drawing on the pile of the internet in general, not acting on behalf of specific creators or consumers.

Imagine the consumer, surrounded by nodes of information distribution that rise around like mountains, information racing down the mountainsides like the spring melt, creating torrents that rush through a steep valley. Off to one side, a filter sorts the waters from the waters and channels them on to the consumer. Visionary? Not really – more like fantasy.

 

Some time ago I wrote a very short and (I thought) amusing story about benevolent AIs competing to run the world with near-perfect analytical and predictive powers. But occasionally humans do something unanticipated. The AIs then race each other to best leverage (for commercial advantage and, ultimately, control of the planet) these rare instances of unanticipated human creativity and divergence from the world as predicted. Something like Asimov’s ‘Mule’ in Foundation and Empire, but wrapped up in a mere two-minute read. Find “Uniquity” here.

 

Another diversion on this theme before I fold: regulation. We’ve seen the understandable angst and anger of human creatives whose works have been consumed by LLMs, turned into numeric values suitable for algorithmic processing, and regurgitated without credit in the growing stream of LLM utterances.

It’s not hard to imagine, in response to LLMs’ unregulated siphoning of others’ creativity, governments attempting to regulate content such that it must display which of its constituent parts are human-sourced and which are machine-generated. This is already happening in tertiary institutions.

The written material above was written off my own bat with machine support for research.

 

To close: what follows is ChatGPT 4.0’s far more pedestrian and sensible response to my query about how we humans might cope with the flood of written material that it and other LLMs can generate ad infinitum:

1. Filtering and Curation: Search engines, platforms, and other content aggregators are continuously improving their algorithms to filter and curate content. This helps in managing the influx of information and presenting users with what is most relevant to them.

2. Content Quality: While LLMs can generate large amounts of content, the quality and usefulness of that content can vary. High-quality, insightful, or unique content is more likely to get attention and be shared, while low-quality content may be ignored or filtered out.

3. Consumption Habits: The impact of increased content also depends on consumption habits. People may become more selective about their sources and choose to engage with content that provides the most value to them.

4. Regulation and Standards: There could be an emergence of new standards and regulations that govern the use of LLMs for content creation, which may affect the amount of content that is produced and published.

5. Technological Advancements: Advancements in technology could also provide new ways to manage and navigate information effectively, such as personalized AI assistants that can summarize and prioritize content based on individual preferences.

 

Which kind of covers it, thanks ChatGPT. But leaves a lot to the imagination.

 

_ _ _ _ _ _ _ _ _

 
