The Triadic Continuum

I wonder if they have started thinking beyond assembling data for future recall … toward also performing "data finds data" and "relevance finds the user" processing at ingestion. The principle being: the exact moment new observations (data) are ingested into a persistent context data store also happens to be the cheapest moment, computationally, to detect relevance (insight). [More here: Sensing Importance: Now or Never]
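To make this concrete for myself, here is a toy sketch (entirely my own, with hypothetical names, not anything from their work) of doing the relevance detection in the same pass as ingestion: each new observation is indexed as it arrives, connections to prior observations fall out of the same pass, and standing user interests fire immediately.

```python
from collections import defaultdict

class IngestingStore:
    """Toy store that detects relevance at the moment of ingestion."""

    def __init__(self):
        self.by_feature = defaultdict(list)  # feature value -> prior observations
        self.interests = []                  # standing "tell me when you see X" predicates

    def watch(self, predicate, callback):
        """Register a standing interest; it fires as matching data arrives."""
        self.interests.append((predicate, callback))

    def ingest(self, obs):
        """Index the observation and detect relevance in the same pass."""
        connections = []
        for feature in obs["features"]:
            connections.extend(self.by_feature[feature])  # data finds data
            self.by_feature[feature].append(obs)
        for predicate, callback in self.interests:
            if predicate(obs):
                callback(obs)                             # relevance finds the user
        return connections
```

The point of the sketch: no second pass, no nightly batch. The instant an observation lands, the features are already in hand, so checking them against prior data and standing interests costs almost nothing extra.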

[… theoretically, each individual particle of data occurs only once within the structure …]

I wonder if they have been playing with the concept of also making disambiguation assertions during the ingestion process. [More here: Entity Resolution Systems vs. Match Merge/Merge Purge/List De-duplication Systems]
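Here is what I mean by a disambiguation assertion at ingest, in toy form (my own illustration with made-up names, nothing to do with their implementation): when an incoming observation shares a strong identifier with a known entity, the same-entity assertion is made on the spot rather than in a later batch match/merge pass.

```python
class EntityResolver:
    """Toy resolver: asserts same-entity at ingest when a strong
    identifier (e.g., an account number) matches a known entity.
    Real entity resolution also handles transitive and fuzzy matches;
    this sketch deliberately does not."""

    def __init__(self):
        self.entities = {}        # entity id -> list of observations
        self.by_identifier = {}   # strong identifier -> entity id
        self.next_id = 0

    def ingest(self, obs):
        entity_id = None
        for ident in obs["identifiers"]:
            if ident in self.by_identifier:
                entity_id = self.by_identifier[ident]   # disambiguation assertion, made now
                break
        if entity_id is None:
            entity_id = self.next_id                    # no match: a new entity
            self.next_id += 1
            self.entities[entity_id] = []
        self.entities[entity_id].append(obs)
        for ident in obs["identifiers"]:
            self.by_identifier[ident] = entity_id
        return entity_id
```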

[… a computer data structure that is self organizing - in other words, a data structure that naturally organizes new data by either building on the existing data sequences or adding to the structure as new data are introduced.]
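One way to read "each particle of data occurs only once" together with "building on the existing data sequences" is a trie-like structure, where shared prefixes are stored exactly once and a new sequence either rides existing paths or grows new nodes. A toy sketch of that reading (mine, not the patented structure):

```python
class SequenceStore:
    """Trie-like store: shared prefixes are stored once; a new
    sequence either extends existing paths or adds new nodes."""

    def __init__(self):
        self.root = {}
        self.node_count = 0   # how many distinct particles are stored

    def insert(self, sequence):
        node = self.root
        for particle in sequence:
            if particle not in node:       # new data: grow the structure
                node[particle] = {}
                self.node_count += 1
            node = node[particle]          # existing data: reuse the path
```

Insert the sequence J-O-N and three nodes exist; insert J-O-B afterward and only one node is added, because J and O are reused. That is the "self-organizing" flavor as I understand it.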

This makes me wonder if they have been working on the ripple effect that can happen when a new observation invalidates earlier disambiguation or relationship assertions. [More here: Sequence Neutrality in Information Systems]
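To illustrate the ripple problem as I think of it (again a toy of my own, not their approach): if every observation is retained, a later assertion that undoes an earlier match can be handled by re-resolving the affected observations rather than by patching assertions in place.

```python
class RevisableResolver:
    """Toy sketch of sequence neutrality: every observation is kept,
    so a later retraction can force earlier matches to be re-resolved."""

    def __init__(self):
        self.observations = []     # everything ever ingested, in arrival order
        self.split_idents = set()  # identifiers later asserted to be unreliable

    def ingest(self, obs):
        self.observations.append(obs)

    def retract_identifier(self, ident):
        """New evidence: this identifier no longer proves same-entity."""
        self.split_idents.add(ident)

    def resolve(self):
        """Re-run resolution over all observations, honoring retractions.
        The ripple effect is absorbed by replaying, not by patching."""
        entities = {}
        for obs in self.observations:
            key = next((i for i in obs["identifiers"]
                        if i not in self.split_idents), None)
            group = key if key is not None else id(obs)  # no key left: stands alone
            entities.setdefault(group, []).append(obs)
        return list(entities.values())
```

Two people sharing a household phone number get merged at first; once the phone number is retracted as a person-level key, a fresh resolve splits them back apart, no matter what order the evidence arrived in.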

[… the format and organization of the Triadic Continuum not only hold the representation of the data, but also the associations and relations between the data and the methods to obtain information and knowledge.]

First, it appears they are thinking of context in the same light as I do. [More here: Context: A Must-Have and Thoughts on Getting Some …]

Maybe I am reading too much into this, but I also wonder if the words "relations between the data and the methods to obtain information" have anything to do with commingling new observations and user queries into the same data space. [More here: What Came First, the Data or the Query?]
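Here is the sort of commingling I have in mind, in toy form (my own sketch, hypothetical names): a query is stored as just another record, so data arriving after the query still finds it, and a query arriving after the data still finds that too. Order stops mattering.

```python
class QueryDataStore:
    """Toy sketch of commingling queries and data in one space:
    a query is just another record, so it can match data that
    arrives after it, and vice versa."""

    def __init__(self):
        self.data = []
        self.queries = []
        self.hits = []    # (query_label, observation) pairs, in match order

    def add_data(self, obs):
        self.data.append(obs)
        for label, predicate in self.queries:     # new data finds old queries
            if predicate(obs):
                self.hits.append((label, obs))

    def add_query(self, label, predicate):
        self.queries.append((label, predicate))
        for obs in self.data:                     # new query finds old data
            if predicate(obs):
                self.hits.append((label, obs))
```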

[… has developed and patented a new data structure …]

Having thought a lot about how data structures govern function, I do sometimes think about persisting the data in something other than an SQL database engine to get even higher throughput rates. Although, then I ponder how to make up for the freebies that come with industry-standard SQL engines, like transaction consistency, restartability, and a large community of trained DBAs. So I wonder what thoughts this team has in this area. [More here: Big Breakthrough in Performance: Tuning Tips for Incremental Learning Systems]
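For what it is worth, here is a toy of how I would start clawing back one of those freebies (restartability) without an SQL engine: an append-only write-ahead log, flushed before each write is applied in memory and replayed at startup. My own sketch, hypothetical names, and far short of real transaction semantics:

```python
import json
import os

class DurableStore:
    """Minimal append-only log: each write is appended and forced to
    disk before it is applied in memory; a restart replays the log."""

    def __init__(self, path):
        self.path = path
        self.state = {}
        if os.path.exists(path):              # restartability: replay the log
            with open(path) as f:
                for line in f:
                    rec = json.loads(line)
                    self.state[rec["k"]] = rec["v"]
        self.log = open(path, "a")

    def put(self, key, value):
        rec = json.dumps({"k": key, "v": value})
        self.log.write(rec + "\n")            # write ahead ...
        self.log.flush()
        os.fsync(self.log.fileno())           # ... and force it to disk
        self.state[key] = value               # only then apply in memory

    def close(self):
        self.log.close()
```

This buys durability and restartability at the cost of an fsync per write; batching and checkpointing would be the obvious next tuning steps, which is exactly the sort of trade-off I would love to hear their thinking on.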

No matter where these folks are in their evolution – first or tenth generation – I am really excited to hear about their work.

Hopefully, one day, I will have the chance to chat with these folks. Kindred spirits I suspect.