Cyber-Physical-Biological Symbiotic Systems

I’m interested in cyber-physical-biological symbiotic relationships from several angles. Here I focus on the larger scale, viewed from an infrastructure point of view. As the scope and functionality of today’s IoT and communications networks expand, we often think about increasing the sensing capabilities in our environment. However, we tend to think in terms of an infrastructure that may be very useful to its owner/operator(s) but that, to the extent humans are “sensed” in some manner at an individual level, invades our privacy even as it offers (one hopes) some larger social good: perhaps enhanced security, improved traffic flow, or a deepening scientific understanding of environmental changes.

Much more interesting are those technologies that could turn the human’s relationship with the sensing infrastructure on its head. To this end, I am exploring, in scientific terms, what such a plugged-in human could look like. Clearly, the current system is as much a product of legal and economic factors as of technological limitations; after all, someone has to pay for this infrastructure.

Putting those challenges aside, as a thought experiment, I am proposing to examine a scenario where the infrastructure exists for individuals to plug into, for their immediate benefit. In the more distant future, this might extend all the way to being able to plug into and out of, at will, a hive mind. In the more tangible intermediate future (yet still some decades out), even without the transfer of consciousness inherent in a hive mind concept, the proposed scenario would enable us to plug into the extreme sensing capabilities provided in and by the environment, as an extension of ourselves. I look to both nature and technological development for integration with the existing human brain, to maximize the benefits of the additional, distributed sensory perceptions.

The individual components of the envisioned technological solution rely on a combination of: engineered prostheses (internal and external) to provide capabilities not currently possible; psychotropic agents to enhance and stimulate latent capabilities of the brain; various aspects of synthetic biology to create, for example, genetic modifications or symbiotic relationships with internalized organisms; and related neural and computational control systems. The latter are particularly critical if we are to extend the distributed sensory system into enhanced actuation, but equally important simply to prevent damage to the “naked” human mind from a vast influx of completely unfamiliar and overwhelming quantities of data. Each approach has its drawbacks, and a multi-pronged strategy is almost certainly required.

Applying these technological advancements to the “infrastructure scenario,” we can envision moving through an environment in which we can, at will, plug into – and therefore receive and process – data provided by the infrastructure. The enhanced human might experience, for example, the variation of a magnetic field or of radiation levels over a whole range, or the sum total of the waste energy released as heat by nearby human activity (along with how best to capture it locally), among myriad other examples. This could happen as naturally as viewing the interplay of light and shadow over that same range is for most of us today. One important aspect of this scenario is the ability of the enhanced human to choose to turn these capabilities on or off at any given time and place, and to fully utilize the data available, rather than the system alone “choosing” whether to track that particular human’s movement.
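The opt-in/opt-out relationship described above can be sketched as a simple publish/subscribe pattern. This is a minimal illustration only, not a proposed design: all names here (`SensorHub`, `Receiver`, the channel labels) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an infrastructure node publishes named sensor
# channels; a human-carried receiver perceives only the channels it has
# chosen to plug into, and can unplug at any time.

@dataclass
class SensorHub:
    """Infrastructure node offering named sensor data channels."""
    channels: dict = field(default_factory=dict)  # channel name -> latest reading

    def publish(self, channel, reading):
        self.channels[channel] = reading

@dataclass
class Receiver:
    """Human-side endpoint: only subscribed channels are perceived."""
    subscriptions: set = field(default_factory=set)

    def opt_in(self, channel):
        self.subscriptions.add(channel)

    def opt_out(self, channel):
        self.subscriptions.discard(channel)

    def perceive(self, hub):
        # The human receives only what they have chosen to plug into.
        return {c: v for c, v in hub.channels.items() if c in self.subscriptions}

hub = SensorHub()
hub.publish("magnetic_field_uT", 48.2)
hub.publish("waste_heat_W_per_m2", 12.7)

me = Receiver()
me.opt_in("magnetic_field_uT")
print(me.perceive(hub))   # only the opted-in channel is perceived
me.opt_out("magnetic_field_uT")
print(me.perceive(hub))   # fully unplugged: nothing is perceived
```

The key property, matching the scenario above, is that the choice of what flows to the human sits entirely on the human side of the interface.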

Many of the individual components of these technologies exist, in various stages of development, though several are nowhere near viable for large-scale human experiments. From a control systems point of view, however, the really interesting questions lie in how to integrate the required inputs and outputs of this enhanced human. The human, in this case, is a mobile platform that moves about the system somewhat unpredictably and accesses it at will, in different ways, from different places. To what extent will this new type of control system rely on the human brain as a computer? Where are the opportunities (or requirements) for human-controlled (or human-activated) distributed computing? For ideal function of this enhanced human, where is the proper balance between human-centralized computing and learning (whether biological or engineered/artificial) and edge computing and learning? It is this area that I am primarily interested in.
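One way to make the central-versus-edge balance concrete is as a placement policy: a toy rule deciding whether a sensing task runs on the human-carried processor or in the infrastructure. The function, its parameters, and the thresholds below are illustrative assumptions, not a proposed architecture.

```python
# Illustrative sketch only: a toy policy for splitting computation between
# the human-carried ("central") processor and infrastructure ("edge") nodes.
# All parameter names and default values are assumptions for illustration.

def place_computation(data_rate_mbps, latency_budget_ms,
                      uplink_mbps=10.0, edge_rtt_ms=20.0):
    """Return 'edge' or 'central' for where a sensing task should run.

    Offload to the edge when the raw stream would saturate the uplink to
    the human-carried processor, unless the latency budget is too tight
    to tolerate a round trip to an edge node.
    """
    if latency_budget_ms < edge_rtt_ms:
        return "central"   # reflex-speed tasks must stay with the human
    if data_rate_mbps > uplink_mbps:
        return "edge"      # pre-process in-infrastructure, send summaries
    return "central"       # cheap enough to process on the human side

# A high-bandwidth field survey gets summarized in the infrastructure:
print(place_computation(data_rate_mbps=500, latency_budget_ms=100))  # edge
# A fast protective reflex must run on the human side:
print(place_computation(data_rate_mbps=500, latency_budget_ms=5))    # central
```

Even this toy version shows why the balance is dynamic rather than fixed: as the human moves unpredictably through the system, the effective uplink bandwidth and edge round-trip time change, and with them the optimal placement.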

As always, I welcome emails for further discussion.

August 28th, 2018 | Embedded, Robotics and AI

About the Author:

Dr. Karin Hollerbach’s core competence includes creating connections across disciplines and using them to solve complex business and/or engineering problems. A central theme in her career has been to quickly grasp complicated scenarios and implement effective and lasting solutions. With her unique combination of technical depth and leadership skills, Karin has helped companies expand globally, develop products and technologies, license technologies and attract/deploy investment.
