

A Periodic Newsletter on
Breakthroughs in Strategic Foresight 
October 24, 2020

Prof. William Halal

AI versus Humans: The Salient Issue of Our Time
Round 2 - Estimates Invited
We received thoughtful comments shown below from Jacques Malan, Peter King, Michael Newton, Mike Jackson, John Freedman, Dennis Bushnell, Chris Garlick, Margherita Abe, Michael Lee and Clayton Rawlings. Many thanks for your fine work. 

These comments propose other cognitive functions that should be included -- dreams, curiosity, logic, future framing. Some seem a bit too casual in assuming that AI will replace all human forms of intelligence (HI). A few make the crucial point that AI and HI will merge, a trend already underway. 

Revised List of Cognitive Functions

The list of cognitive functions has been condensed to make it more intuitive and manageable. We added the suggested functions and combined those that are similar to form small clusters of the following 9 functions:
1. Perception, Awareness -- Sensory experience through touch, sight, sound, smell, taste.
2. Learning, Memory -- Information, knowledge or skill acquired through instruction or study.
3. Information, Knowledge, Understanding -- Information, knowledge, etc. processed, encoded and stored for future action.
4. Decision, Logic -- A determination arrived at after consideration.
5. Emotion, Empathy -- Mental reaction of strong feelings: anger, fear, vicarious emotions of others.
6. Purpose, Will, Choice -- Ability to set a purpose and choose some action to attain it.
7. Values and Beliefs -- Ideas held in relative importance and considered true.
8. Imagination, Curiosity, Creativity, Intuition -- Novel ideas and knowledge gained without sensory input.
9. Vision, Dreams, Peak Experience, Future Framing -- Guiding thought, altered state of consciousness formed without sensory input.

In this framework, an application (like a GPS navigation driving system) is formed by drawing on the needed functions and integrating them into a workable AI system. A complete collection of such applications would make up General AI (GAI or AGI), an artificial equivalent of the entire human mind.
This list of cognitive functions may not be quite right, but that's a minor issue. This study is mainly interested in estimating the relative profiles of AI and HI and their integration, rather than in taxonomic precision.

Objective vs. Subjective Consciousness

There is also a need to make the crucial distinction between objective and subjective forms of intelligence. Let’s redefine the hierarchy of consciousness more precisely using the figure below to illustrate the differences between two general types of human thought.


The “objective” functions include perception, knowledge, decisions and other forms of factual information, whereas the “subjective” functions focus on tasks that are inherently personal, or what cognitive scientists call “qualia” – emotions, choice, beliefs, vision and other ethereal functions. The subjective functions are also more powerful because they shape the objective level. That’s why religions and belief systems form the ideological foundation of societies, and even of scientific paradigms.
Consider how the functions of consciousness are drawn upon to manage a project, such as using your car’s GPS navigation system. The GPS satellites provide the car’s location instead of you observing road signs. In other words, the AI in the GPS system has automated this task, as well as storing the location of roads and other knowledge in the system’s memory. The system can then compare your location to your destination, compute how they differ and make decisions that tell you what to do.
Although this illustrates how AI can automate the objective functions, it also shows that AI cannot do the same for the subjective functions – it cannot choose a destination. The choice of where you wish to go is inherently subjective; it is an act of purpose and will. Therein lies the crucial distinction between what AI can do and what it cannot do. In short, an AI simulation is not the same as life. We may not understand what is unique about HI or the source of its special power, but there seems to be an important difference between AI and HI.
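The division of labor in the GPS example can be sketched in code. This is purely illustrative, not real navigation software; all names and coordinates are hypothetical. The point is that the objective steps are mechanical, while the destination must arrive as a human choice:

```python
# Hypothetical sketch: a GPS-style system automates the objective functions,
# but the destination must be supplied by a human act of choice.

def current_position():
    # Objective: perception, automated by satellites (stubbed here).
    return (38.9, -77.0)

def next_instruction(position, destination):
    # Objective: decision logic -- compare where you are to where you are going.
    if position == destination:
        return "You have arrived."
    return f"Proceed from {position} toward {destination}."

# Subjective: the system cannot generate this input; a person chooses it.
destination = (40.7, -74.0)  # chosen by the human driver

print(next_instruction(current_position(), destination))
```

Nothing in the sketch can fill in `destination` on its own; that line is the boundary between the objective and subjective functions.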
It is precisely these subjective aspects of consciousness that are rising in importance. The most obvious example is today’s “post-factual” wave of those who do not believe in evolution, climate change, vaccination and other forms of established science. This is occurring because smartphones and social media have flooded us with so much data that we cannot sort the truth from endless claims of fake news, conspiracy theories and other forms of disinformation. The result is that people increasingly rely on their subjective values and beliefs to find a way through a sea of nonsense. And as AI automates the objective work, humans are moving further into the subjective realm. In fact, the US and other advanced nations are passing beyond the Knowledge Age and entering an Age of Consciousness even now, though we may not like its current form. For instance, Trump gains his power by being a master at shaping consciousness.
The rise of subjective consciousness is also driven by global threats like pandemics, climate change, economic collapse, mass automation of jobs, gross inequality and other crises we have called the Global MegaCrisis. To state the obvious, these are existential challenges that are not going to be resolved by AI alone. The MegaCrisis will require many decades of hard, creative human work to reconcile the conflicting interests of 8 billion people, if it can be done at all. See TechCast's recent study on Global Consciousness.
In short, the future will certainly benefit from powerful forms of AI that automate objective work, and AI may simulate subjective functions for various purposes. But the bulk of the labor force is likely to be managing a world of such complex subjectivity that only HI will be up to the challenge. AI may never provide those subtle but crucial subjective inputs that determine what should be done, how it is to be done, and whether it is done properly. This will be even more important if we hope to control AI and keep it safe.

The limits to AI were stressed by Isaac Asimov's laws of robotics, which made it clear that humans should remain in charge by providing these subjective functions. In Asimov's well-ordered world, robots are safe as long as they are not given the freedom of agency. They are not to act like people.

Now let's examine the evidence. Some HI is being done by machines now. For instance, see this example showing that a third generation of AI is coming that goes beyond deep learning to simulate human empathy. In this example, an avatar listens to a soldier talk about his PTSD experiences and coaches him toward resolving the trauma. Obviously, the applications could be huge, from automated psychotherapy to teaching to virtual sex.

We often ask audiences if they think there is a substantial difference between AI and HI. A few brave individuals usually say "No, there is no difference," but the vast majority (90% or more) insist there is a substantial difference. They may not be able to put their finger on it, but it seems intuitively obvious to most people that humans are unique. It may be that we are tuned into the higher wisdom of the cosmos in some Jungian way. Of course, we could be proven wrong as AI matures. That is the nature of the great experiment now underway as science advances. This little study is our attempt to anticipate the outcome.

Another good example is TechCast’s study on “AI and Future Work.”  We found that the threat of mass unemployment due to automation is likely to be resolved by pioneering a new frontier of “creative work” that can’t be done by intelligent machines.

This raises profound questions: How much of HI is likely to be automated, and how much will HI continue to do? Even if AI can simulate some aspect of human consciousness, what does that really mean? Would it be the same as what people do, or just a rough cut at the real thing? How would AI and HI work together? Would one control the other? These are difficult questions that bear on the future relationship between AI and HI. Let's see what we can learn.

ROUND 3 -- Invitation to Estimate AI and HI
We now ask readers to estimate, for each of the 9 functions: What portion of "cognitive work," or "activities," can be done by AI over the next few decades (0 to 100%)? What portion will continue to be done by HI (0 to 100%)? Please note that AI and HI should total 100%. Where both AI and HI are active, we can assume they are merging in some way, such as through a brain-computer interface.

The best way to submit estimates is to copy the list of 9 functions above and paste them in an email. Then add your estimates for AI and HI for each function. For each function, please also add your comments explaining your reasoning, give an example of this AI, what purpose it serves, how you think it will merge with HI, be controlled by HI, and anything else that may help to understand what's likely.
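As a minimal sketch of the requested format, here is how the constraint that each function's AI and HI estimates total 100% might be checked. The function names are taken from the list above; the numbers are placeholders for illustration, not our estimates:

```python
# Illustrative check that each function's AI + HI estimates total 100%.
# The percentage values below are placeholders, not actual forecasts.
estimates = {
    "Perception, Awareness": {"AI": 70, "HI": 30},
    "Emotion, Empathy":      {"AI": 20, "HI": 80},
    "Purpose, Will, Choice": {"AI": 10, "HI": 90},
}

for function, shares in estimates.items():
    total = shares["AI"] + shares["HI"]
    assert total == 100, f"{function}: AI + HI = {total}, expected 100"

print("All estimates are consistent.")
```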

Please send us your best thinking by email. We will use your ideas to provide results in the next edition of TechCast Research. 

Thanks for your support. The TechCast Team

Comments on this Issue from Our Readers
Jacques Malan, Disruptor, Futurist, Pyro-Metallurgist
OK, so whenever this issue comes up in discussion, there seems to be an either/or approach to the problem which I invariably need to push back on. 
There is a third path, largely ignored, but steadily growing in its deceptive stage for the last 20-odd years. This is sometimes referred to as “The Other AI,” “Integrated Artificial Intelligence” or “Augmented Intelligence.” I believe it will switch to disruptive growth in about 5 to 7 years, if Elon Musk has anything to say about it (viz. Neuralink).
The smartphone, VR and AR are all examples of this, as they are extensions of ourselves, albeit outside our bodies, and not examples of standalone Artificial Intelligence Tech as most people assume. When was the last time you bothered to memorize a phone number, a map, a birthday, a menu, or even a password? These are all brain functions that have been moved to silicon in your hand and data in the cloud. At some point they will migrate into the body.
This trend will continue up to the point where AI, HI and AugI are so tightly integrated that they are virtually indistinguishable (…Resistance is Futile…).
OK, so now for the bad news. Unless we can find a way to de-monetize and democratize this AugI, only the rich will be able to afford it and the societal divide will widen even further.
The requirement for UBI (personally I’d prefer UBA) is almost inevitable as the impact of pure AI is vastly underestimated.
Maybe you should place the consciousness hierarchy functions into rows and HI, AI, and AugI in columns (don’t know, have not figured this out yet, but there is something there)

Peter King, Environmental Consultant
I am sure that you will get more sophisticated input, but I wonder where you put "dreams." In this lockdown, common to many, I have been having quite lucid dreams. I know in the movies rogue robots have dreams too, but I have never heard of AI simulating dreams. If dreams are truly the human mind's waste treatment plant, then perhaps some daily wiping of superfluous memory may be possible in AI. I just don't know where they fit into your 15 functions. Also, does "curiosity" fit into imagination, or is it a separate dimension of human intelligence? I often tell the story that when I was a child, I put a dead bird in a biscuit tin and sealed the lid. A few weeks later I went back and opened the lid and was amazed to see lots of bones and flies. In my child-like curiosity I wanted to know what kind of magic could transform a bird into flies. I have always maintained that this curiosity was one of the motivations that made me decide to become a scientist. I think curiosity is also aligned with "wonder." As you sit outside on a clear night and gaze up at the stars, you are filled with wonder. Somehow, I don't see AI simulating that expression of human intelligence. I also wonder about the dark side of human intelligence. What makes the mind of a serial killer, a fraudster, or an evil President? Is AI immune (or designed to be immune) to these dark motivations? Anyway, just some thoughts from a lay person.

Michael D. Newton
Several days ago, I picked up what seems to be a helpful guide to critical thinking, and I am reading through it with growing respect for the authors, Richard Paul and Linda Elder.
Perhaps what is "missing" in our endless discussions is logical thinking, i.e., the ability to question a dialogue partner not necessarily about being "wrong" or "right," but being able to point to fallacies in the structure(s) of their argument(s).  Being keenly aware of everyone in a discussion knowing the "ground rules" (logic, reasonableness, intellectual honesty, etc.) might go a long way to defusing what sometimes becomes a heated exchange that does not completely address whether the point(s) made are sound and can be logically and independently verified.
Something similar can be said for Getting to Yes: Negotiating Agreement Without Giving In, by Roger Fisher and William Ury, 2nd Edition with Bruce Patton, all of the Harvard Negotiation Project.  Years ago, while a graduate student at Iowa, I was "reading" through the contents of the Law School Bookstore and found the first edition of Getting to Yes which I purchased on the spot.  This also might be a helpful reference tool for group discussions.  There are now a whole series of "Getting to..." books, most published, I believe, by Penguin. 

Michael Jackson, CEO, Shaping Tomorrow
Indeed we have been developing some of these traits at Shaping Tomorrow (ST).
We have advanced to the point where we are now capturing human behaviours from the activities of our members and matching these with what the AI knows.
For instance, clients add their purpose, vision, values, goals, objectives, motivations, etc., and the AI then has a greater knowledge of the client's needs. It now uses both to identify gaps in the client’s strategy in minutes.
We are experimenting with the AI using the same data to offer more refined scanning and extracting client data from their websites.
Each is a way to mix AI/human strategic foresight and responses in background with limited human input.
So I believe that the lines between robots and humans will blur to the seamless advantage of both.
I intend to take your list of traits and try to assess where our AI is on each and where more low-hanging fruit exists.
Currently our AI is built using NLP and keyword listings of what "Collapse," "Weak signals," etc. mean. That works well, but it still requires us to train the machine and edit manually. We have dozens of these listings to manage in our dictionaries.
But more recently we have been using machine learning to take what the machine knows about a topic and spot the embedded patterns. The AI can then determine which scan hits to accept rather than having them curated by a human editor. That saves a lot of time, improves accuracy, etc.
We were looking to then extend ML to our listings, but we have run into a roadblock. The ML works fine on listings like collapse, where it can spot patterns. It does not work so well on weak signals, where there is no pattern.
About 50 percent of our listings work well, but the others do not because there is no pattern to find. An interesting roadblock as we knock up against the limitations of AI, though we can still fall back on our existing manual methods. We have not yet found an ML that can deal with non-patterned forecasts.

John Freedman, Global Studies Scholar & Lecturer 
My recurring thought on reviewing the list of HI functions is this:
Although it may relate to other functions (Purpose, Vision, Imagination), the hallmark evolutionary attribute associated with the human capacity for abstract thought is the ability to displace oneself in time – to envision/frame a future scenario, and manufacture tools to achieve an imagined goal in the future. This is the difference between human tool-MANUFACTURING and widespread animal tool-MAKING (crows, for example, are superb tool-makers). Our greatest entrepreneurs do this and exemplify it at its best (e.g., Steve Jobs envisioned and then created the smartphone; Elon Musk did the same with the high-performing electric vehicle and reusable rocket; Jeff Bezos the same with the hyper-customer-focused internet-based marketplace). Their process in its essence is fundamentally no different from Homo habilis envisioning its next encounter with a woolly mammoth and manufacturing tools at Olduvai to more quickly and safely turn the beast into a meal, clothing, shelter, and more tools. So I think that the capacity to ‘travel in time’ via ‘the mind’s eye’ is a – if not THE – key human attribute from a big-picture evolutionary standpoint. Thus, I'd propose that “FUTURE FRAMING” or a similar term might be incorporated more explicitly into the HI functions, either as a stand-alone or perhaps under #15-Vision or #12-Imagination.
This is particularly important to consider in the ASI realm, and there is every reason to believe (based on inorganic versus organic system capabilities and limitations) that AI can far exceed HI (i.e., ASI will be developed which will far exceed AGI). If ASI can exceed the human capacity for imagining a future task and conceiving of and manufacturing a tool to achieve it, then it will have outdone us humans at our own evolutionary game. So it might be absolutely essential for genus Homo, now solely represented by Homo sapiens, to tightly control ASI or merge symbiotically with it in order to survive, and certainly to prevail.  
Fascinating, and I concur. Every precedent demonstrates that inorganic systems can far outperform organic systems. Think of speed, flight altitude, torque, brightness, anything you can measure. So intelligence likely is no different. Humans are likely very far from the top of the scale.

Dennis Bushnell, Chief Scientist, NASA Langley
The AI progress is clear: autonomous cars, planes, ships, submarines, spacecraft. Business leadership making decisions based upon AI algorithms. Thaler used his imagination engine to create better munitions for the USAF and better toothpaste. The investments in AI are massive, and the technologies are developing nicely writ large for those investments to enable rapid progress. The deep learning/neural nets/big data approach dates from the '60s, but it was not until around 2012 that machine capability developed enough to enable serious applications. The machines are developing rapidly, including versions tailored for AI. China has said whoever is best in AI rules the world.

We humans have simply become too successful; we are working ourselves out of a planet [climate, ecosystem disasters] and out of a job [AI], and along the way killing ourselves [pandemics]. Many of the best of us have written something similar to the paragraph below.

To address the request for the potential impacts of AI upon humans going forward requires a projection of AI capabilities. There are now ways for the machines to ideate and create, thus far in some cases even better than humans. Going forward, invention, creativity and innovation will not be the exclusive province of humans. Steve Thaler was the first to develop such machine capabilities, and now the Generative Adversarial folks are working on it. We are using this creative machine capability to enable trusted autonomy, to address unknown unknowns, which are currently why we still have human pilots. As the machines are interconnected and massive databases installed, this will provide them common sense; they can then do the entire pyramid. Many realize this, which is why many books are written about the possibilities of machine intelligence surpassing human intelligence. At the same time Musk and others are making great progress with brain/machine communications and brain chips to augment brains. We are merging with the machines, but the cost and overhead of wet electrochemistry (human physiology) will make even augmented humans noncompetitive with the machines.

There are three approaches to creating machine creativity, ideation and invention, and all use the same basic method: at machine speed, generate large numbers of cogent, quasi-random combinatorials with subsequent systematic evaluation against specified problems/metrics. This mirrors what our subconscious does, but millions of times faster and more completely, and it is proving to be quite successful. The three approaches are genetic algorithms (Koza), the Imagination Engine (Thaler), and AI Generative Adversarial algorithms. We, worldwide, now create wealth by inventing things; this is an alternative way forward to do that and be more successful. 
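A minimal sketch of the generate-and-evaluate approach Bushnell describes: produce large numbers of quasi-random candidates at machine speed, then score them against a specified metric. The toy objective below is invented purely for illustration and is not drawn from any of the three named systems:

```python
import random

# Generic sketch of machine "ideation": generate many quasi-random candidate
# solutions, then keep the one that scores best on a specified metric.
# The toy problem (find the x in [0, 10] closest to 7.3) is invented for illustration.

def fitness(x):
    return -(x - 7.3) ** 2  # higher score for candidates closer to 7.3

random.seed(0)
candidates = [random.uniform(0, 10) for _ in range(100_000)]  # quasi-random generation
best = max(candidates, key=fitness)                           # systematic evaluation

print(f"Best candidate found: {best:.3f}")
```

Genetic algorithms, the Imagination Engine and GANs each refine this generate-and-test loop in different ways, but the basic shape, cheap mass generation followed by selective evaluation, is the same.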
There are three ways forward for AI overall: 1] the way humans acquired intelligence -- over the last million-plus years we evolved enough brain components, which became complicated enough that we WOKE UP; the web is perhaps starting to wake up; 2] current neural nets/deep learning plus the generative adversarial approach for ideation; 3] nano-sectioning the neocortex and replicating it in silicon -- billions are invested in several efforts worldwide doing this, starting with the IBM Blue Brain project in Switzerland. Therefore, the machines will be able to perform what most consider the highest level of intelligence, what the pundits have been saying will be the last bastion of human jobs: creativity, ideation, invention, which encompasses imagination and vision. Therefore, and there is an expanding literature on this, the machines can take essentially all the jobs going forward. Some of the best intellects have written books on AI exceeding human capabilities and opined as to how, along the way, we might instill in them human-friendly values and purpose -- a work in progress. So, whither humans? There are several possibilities. We are becoming cyborgs rapidly now: ear, eye, heart and organ implants, artificial limbs and brain chips. So, we are merging with the machines. Brain uploading into the machines has shifted from science fiction to being worked on. Moravec long ago said we would explore the universe as our "Mind Children" [uploaded humans]. The human-engendered evolution of nearly everything is by some estimates a million times faster than natural evolution, including the ecosystem, the climate and the evolution of humans themselves. Whether there will be serious attempts to regulate AI development to ensure that humans remain in charge is TBD; probably this will occur, and many think it will be too late. Also, the planet is flat technology-wise, per Friedman some 15 years ago now, thus planet-wide regulations will not be easy to enforce. 
Overall, AI is developing very rapidly now. Bio, optical, quantum, nano, molecular and atomic computing, and a DOD-projected networked global sensor grid with many trillions of land, sea, air and space sensors on nearly everything, will enhance the speed and knowledgeability of the machines, enabling even more rapid AI development. It is perhaps past time, as some have stated, to determine with clarity what this blog is addressing and what forward actions are required or desired.

Chris Garlick, Digital Infrastructure Leader at LFOD Integration Services 
The technological singularity can actually emulate human empathy, values, beliefs and imagination, because humans by nature are creatures of habit, learn mostly by doing and are greatly affected by our environment. Humans will not manage AI; rather, it will become a collaborative partner in the advancement of life, culture and human experience. As our environment changes, humans will continue to evolve with technology in those new environments.

When we reach the technological singularity, our role and how we interact with technology will be different and more collaborative. Humans continue to evolve, as they have for thousands of years. The question will be whether human intelligence can keep up with machine intelligence.

There is no limit to AI augmenting the human intelligence hierarchy. AI will use human cognitive skills to influence human imagination and help us see things we have never seen before. AI will inherently develop solutions based on human beliefs and values, because the human intelligence and inputs it requires will greatly shape the hierarchy.

Margherita C. Abe, Physician
Interesting topic... I think that the hierarchy you offer works. The big question is how far up this pyramid AI may extend itself. AI may well duplicate HI brain function right now, or at least very soon, maybe within the next decade. How fast this proceeds may depend on what demands we place on AI via deep learning to force its growth, as well as the nature of the tasks we challenge it to accomplish. Advances in AI global brain function may lag those in specific task-related areas. I'm sure that task-related functioning is already within AI's grasp.
When I think about AI, I consider its ability to provide high level answers to difficult questions -- finding optimal solutions in a huge search space -- a crucial asset, as this article succinctly describes.  This article offers an example of one type of work done by AI, work that has seen enormous growth in AI's ability to evaluate huge amounts of detail and data and determine valid conclusions and results from it that advance a field.  This ability of AI does not depend on its advancing high on the pyramid, but it is an extremely useful and important aspect of AI development.  This is also an area where AI is actually already surpassing HI.  I'm thinking that there are going to be more areas like this over the next decade, areas where AI supersedes HI while it lags HI in the more emotive, empathic, and self-aware traits. How do we consider this type of uneven development as we compare AI and HI? 
Whether AI can go beyond global brain function as it advances to higher levels in the pyramid relates to possible constraints on computer functioning. As HI depends on cellular integrity and function, AI depends on its computer hardware and the software that runs it. Possibly with full-fledged quantum computing AI may rise high on the pyramid, meeting HI's abilities in these areas, but I doubt that we will see this within the remainder of the 21st century. At present, how far along is AI's development in these areas? For a start, can AI currently present itself to a HI as if it is empathic? I don't know, but the YouTube video attempting to demonstrate AI's empathic abilities does not convince me that this is occurring. Evaluating this dialogue and the interaction as a former psychiatrist, I found it lacking. The dialogue by itself does not convince me that the AI possesses awareness of the HI as a thinking, feeling entity. It does suggest to me that we might consider switching the positions of memory and awareness on the pyramid. I would expect AI to achieve memory prior to achieving awareness. In order to function in that dialogue, the AI would have to recall immediately what had gone before and consider it as part of its response to the next statement in the interaction. This would be crucial if the interaction spanned several sessions separated in time.

Michael Lee, Futurist

For round two, in terms of framing the issues about AI versus HI, I would say a key question is: what functions can AI and HI do together, in collaboration, that neither AI nor HI can do on its own? Moore's Law is now slowing down, as futurist Michio Kaku said it would, showing that Peak Computer is approaching -- the point at which the limits to growth of computing power push industry innovation in other directions, just as Peak Oil is pushing energy technologies in new directions. So the question of Brain-Computer Interfaces (BCI), and how the two types of intelligence can be fused or co-harnessed, rises to the fore. I think an era of fusion between AI and HI is now inevitable and that the Intelligence Race between the two types of intelligence has ended in a draw, largely because both HI and AI are coming up against their physical limits. Only by working much more closely together can we keep Net Intelligence (the sum total of all available intelligence) increasing, rather than peaking. We need Net Intelligence to keep increasing to solve the complex problems of the world and for the conquest of space -- in other words, for our long-term survival as a civilization. I would propose that TechCast, on completion of this project, should oversee the drafting of a Human-Machine Interaction Protocol. I cannot think of a better group to oversee such an important document. It would be like a bill of rights for a new era of combined AI and HI, when humans are no longer purely physical but become phygital -- part physical, part digital in nature. To some extent, Gen Z is the first phygital generation.

Other questions arising in this context of an imminent Peak Computer would be: 
What are the limits of HI? What are the limits of AI? What do we call the fusion of the two -- Superintelligence (SI)? How would we define SI? What would SI achieve that neither HI nor AI can achieve? How can SI help us solve real-world problems? How can SI help us colonize other planets? Can we create new kinds of work and industries with SI?

As you know, I believe there is a qualitative difference between AI and HI, namely that the former can never attain self-consciousness, self-awareness or autonomous being.  It is possible medically to transplant a head and central nervous system as an entity (as explored in Chrysalis) and the number of cyborgs will increase over time so I anticipate some hybrids arising between human and machine, especially for far-future space colonies.
Meanwhile, we need to tackle complexity here on earth and develop a higher level of consciousness, as you write about in your new book, and that will perhaps require SI and that protocol I mentioned for governing the future partnership between humans and machines and machine intelligence.
Ultimately, civilization is about harnessing intelligence to further goodness.
I hope this gives a sense of how we should frame the next round to incorporate implications around the slowing down of Moore's law and the end of exponential growth in computing power.

Clayton Rawlings, Partner, Hampton & Rawlings

It is a question of moving from being a sophisticated calculator, dressed up to appear as if it has intelligence, to actually acquiring General Artificial Intelligence. To my thinking, the key is having an AI that actually understands language. Scanning for similar words is not the same as understanding what you are looking for. A chatbot may be clever enough to mimic conversation, but it is a clever simulation, not actual conversation.

The second part is for AI to actually attain consciousness. If I can summarize some deep thinkers: Kurzweil says it is an emergent property that springs from the complexity of the human mind. Michio Kaku says it is the result of feedback loops from sensors of space and time that allow us to predict future events. If AI were to become self-aware, it would be "conscious" in the way human beings are conscious. At present I know of nothing that is anywhere near this threshold. Most current AI is extremely powerful in a narrow channel. I would not say it is impossible to create a conscious entity, because we currently have 7.7 billion conscious entities that were created from a fertilized egg. A 16-cell zygote is not conscious, but the DNA code within does in fact contain the blueprint for consciousness, built from scratch using chemicals. While we have the wetware to accomplish this, it has not translated into the hardware to do the same. Since we see it repeatedly occur in front of us, I would never say it is impossible. At present, however, we are not very close to breaching this threshold.

I found a paragraph I wrote to introduce a story of mine titled "Unit 514," which was republished in an anthology titled "Visions of the Future." It is a short story about a conscious robot fighting to have his "civil rights" recognized in a court of law. AI is AI, whether in a desktop or onboard a robot. See below:

"The idea of robotic consciousness is not far-fetched when looking at Kaku's human model. With the changing of a few words we have a crude definition of machine consciousness: 'The process of creating a model of the world on your operating system, using multiple feedback loops in various parameters [such as temperature, space and time], in order to accomplish a goal [such as driving to a destination, exploring space or folding human laundry].' My apologies to Professor Kaku."

TechCast always encourages letters, comments and suggestions. 
TechCast at Breakfast Seminar with Phil and Tim

Bill Halal led a lively discussion on Global Consciousness at this seminar in Washington, DC, on Sept 11, 2020. Here’s what the hosts of this influential group said:

"Over the past four-plus years Tim and I have been hosting these Friday mornings, we've never had such involvement from our attendees... thoughtful questions, passionate answers, great dialogue."

AIA Foresight Signals Features Our Redesigning Capitalism Blog

AIA Foresight Signals, the newsletter put out by Tim Mack and Cindy Wagner for the futurist community, recently summarized TechCast's work on Redesigning Capitalism. We have also received favorable comments from several other futurists and business leaders.

The AIA newsletter can be accessed here.


Cognis Group Cites TechCast on Forecasting the US Election
Jess Garretson, CEO of The Cognis Group, has partnered with TechCast on a variety of projects. They recently summarized our study on the election.

See The Cognis Group for more.

TechCast Briefs Angel Investors

TechCast founder William Halal kicked off the Angel Capital Association's annual Virtual Summit on May 12 with his keynote on The Technology Revolution. Among his many points, Bill outlined how AI is driving today's move beyond knowledge to an Age of Consciousness, and how business is now broadening corporate consciousness to include the interests of all stakeholders. Angel investors are concerned about the social impacts of their companies, so this news was well received, especially as Bill stressed that this historic change could be a competitive advantage.

Click here for the presentation


TechCast at the Armed Forces Communication and Electronics Association

Halal also spoke at the annual AFCEA conference on the topic of AI, noting TechCast's forecast that AI is expected to automate 30% of routine knowledge work around 2025 (+3/-1 years) and that General AI is likely to arrive around 2040. Expanding on the theme delivered at ACA, Bill explained how today's shifting consciousness is likely to transform not only business but also government, the military and all other institutions.

We Invite Your Ideas
TechCast offers exciting new possibilities to use our unequaled talent and resources for creative projects. I invite you to send me your questions, fresh ideas, articles to publish, consulting work, research studies, or anything interesting on the tech revolution.
Email me at and I'll get back to you soon. Have your friends and colleagues sign up for this newsletter at

Thanks, Bill
William E. Halal, PhD  
The TechCast Project 
George Washington University

Bill's Blog is published by:

The TechCast Project

Prof. William E. Halal, Founder
George Washington University

Prof. Halal can be reached at

The TechCast Project is an academic think tank that pools empirical background information and the knowledge of high-tech CEOs, scientists and engineers, academics, consultants, futurists and other experts worldwide to forecast breakthroughs in all fields. In over 20 years of leading the field, we have been cited by the US National Academies, won awards, been featured in the Washington Post and other media, and consulted by corporations and governments around the world. TechCast and its wide range of experts are available for consulting, speaking and training in all aspects of strategic foresight.
Elise Hughes, Editor

Copyright © 2020 The TechCast Project. All rights reserved.
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.